In recent years, Congress has passed legislation designed to strengthen the linkage between SES performance and pay. Congress established a new performance-based pay system for the SES and permitted agencies whose SES appraisal systems have been certified as making meaningful distinctions based on relative performance to apply a higher maximum SES pay rate and a higher annual cap on total SES compensation. We have testified that such SES and senior-level employee performance-based pay systems serve as an important step for agencies in creating alignment, or "line of sight," between executives' performance and organizational results. Beginning in 2004, an agency could apply a higher cap on SES pay and total compensation if OPM certified, and OMB concurred, that the agency's performance management system, as designed and applied, aligned individual performance expectations with the mission and goals of the organization and made meaningful distinctions in performance. Since 2004, VA has received approval to increase the cap on SES pay and total compensation, which includes bonuses. By law, only career SES appointees are eligible for SES bonuses.

As stated previously, agencies with certified senior performance appraisal systems are permitted higher caps on SES base pay and total compensation. With a certified system, for 2006, an agency was authorized to increase SES base pay to $165,200 (Level II of the Executive Schedule) and total compensation to $212,100 (the total annual compensation payable to the Vice President). Agencies without certified systems for 2006 were limited to a cap of $152,000 for base pay (Level III of the Executive Schedule) and $183,500 (Level I of the Executive Schedule) for total compensation. SES performance bonuses are included in SES aggregate total compensation. Agencies are permitted to award bonuses of from 5 to 20 percent of an executive's rate of basic pay, drawn from a pool that cannot exceed the greater of 10 percent of the aggregate rate of basic pay for the agency's career SES appointees for the preceding year or 20 percent of the average of the annual rates of basic pay for career SES members for the preceding year.

VA requires that each SES member have an executive performance plan or contract in place for the appraisal year. According to VA's policy, the plan must reflect measures that balance organizational results with customer satisfaction, employee perspectives, and other appropriate measures. The plan is to be based on the duties and responsibilities established for the position and is also to reflect responsibility for accomplishment of agency goals and objectives, specifying the individual and organizational performance or results to be achieved for each element. Toward the end of the appraisal period, each executive is to prepare a self-assessment relative to the job requirements in the approved performance plan, and his or her supervisor then rates the executive on each element and provides a summary rating. Specifically, according to VA's policy on the rating process, the rater is to assess the accomplishment of each established performance requirement, consider the impact of the individual requirement on overall performance of the element, and assign one achievement level for each element. The VA rating is a written record of the appraisal of each critical and other performance element and the assignment of a summary rating level by the rater. Each SES member's summary rating is then forwarded to the appropriate reviewing official (if applicable) and to a performance review board (PRB) for consideration.
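To make the bonus limits just described concrete, the following is a minimal arithmetic sketch. The figures, function names, and use of Python are illustrative assumptions for this statement only, not VA's actual payroll rules or systems.

```python
# Illustrative sketch only; the figures and function names are hypothetical.

def max_bonus_pool(prior_year_career_ses_basic_pay):
    """The pool may not exceed the greater of 10 percent of aggregate career SES
    basic pay for the preceding year or 20 percent of the average annual rate of
    basic pay for career SES members for the preceding year."""
    aggregate = sum(prior_year_career_ses_basic_pay)
    average = aggregate / len(prior_year_career_ses_basic_pay)
    return max(0.10 * aggregate, 0.20 * average)

def bonus_within_individual_limits(bonus, rate_of_basic_pay):
    """An individual award must be from 5 to 20 percent of the executive's
    rate of basic pay."""
    return 0.05 * rate_of_basic_pay <= bonus <= 0.20 * rate_of_basic_pay

# Hypothetical example with three career SES members' prior-year basic pay.
prior_pay = [150_000, 160_000, 165_200]
print(max_bonus_pool(prior_pay))                        # 47520.0
print(bonus_within_individual_limits(16_606, 160_000))  # True
```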
VA uses four PRBs to review and prepare recommendations on SES member ratings, awards, and pay adjustments: Veterans Affairs, Veterans Health Administration, Veterans Benefits Administration, and Office of Inspector General. The Veterans Affairs PRB has a dual role in VA in that it functions as a PRB for SES members who work for VA's central offices, such as the Office of the Assistant Secretary for Management and the Office of the Assistant Secretary for Policy and Planning, and for those employed by the National Cemetery Administration. It also reviews the policies, procedures, and recommendations from the Veterans Health Administration and Veterans Benefits Administration PRBs. The Secretary appoints members of three of the four PRBs on an annual basis; members of the Office of Inspector General PRB are appointed by the VA Inspector General. VA's PRBs must have three or more members, appointed by the agency head (or by the Inspector General in the case of the Office of Inspector General PRB), and can include all types of federal executives from within and outside the agency. As required by OPM, when appraising career appointees or recommending performance awards for career appointees, more than one-half of the PRB membership must be career SES appointees. Federal law prohibits PRB members from taking part in any PRB deliberations involving their own appraisals. Appointments to PRBs must also be published in the Federal Register. According to a VA official in the Office of Human Resources and Administration, appointments are made on the basis of the position held, and consideration is given to those positions where the holder would have knowledge about the broadest group of executives. Typically, the same VA positions are represented on the PRBs each year, and there is no limit on the number of times a person can be appointed to a PRB.

VA's PRBs vary in size, composition, and number of SES members considered for bonuses. For example, in 2006, VA's Veterans Health Administration PRB was composed of 18 members and made recommendations on 139 SES members, while its Veterans Benefits Administration PRB was composed of 7 members and made recommendations on 50 SES members. In 2006, six PRB members sat on multiple PRBs, and one member, the Deputy Chief of Staff, sat on three PRBs—the Veterans Affairs, Veterans Health Administration, and Veterans Benefits Administration PRBs. With the exception of the Office of Inspector General PRB, members of the PRBs are all departmental employees, a practice that is generally consistent across cabinet-level departments. The Office of Inspector General PRB is composed of three external members—officials from other federal agencies' offices of inspector general—which is generally consistent with PRBs for other federal offices of inspector general.

Under VA's policy, each PRB develops its own operating procedures for reviewing ratings and preparing recommendations. The Veterans Health Administration and Veterans Benefits Administration PRBs are to submit their procedures to the chairperson of the Veterans Affairs PRB for approval and are to include a summary of the procedures used to ensure that PRB members do not participate in recommending performance ratings for themselves or their supervisors.
VA policy requires any SES member who wishes to be considered for a bonus to submit a two-page justification, based on his or her performance plan, addressing how individual accomplishments contribute toward organizational and departmental goals, as well as appropriate equal employment opportunity and President's Management Agenda accomplishments. While federal law and OPM regulations permit career SES members rated fully successful or higher to be awarded bonuses, VA's policy calls for bonuses generally to be awarded only to those rated outstanding or excellent who have demonstrated significant individual and organizational achievements during the appraisal period. Beyond these policies, each PRB determines how it will make its recommendations. For example, a VA official from its Office of Human Resources and Administration told us that the Veterans Affairs PRB bases its bonus recommendations on an array of numerical scores assigned on the basis of the executive core qualifications. The information that each PRB receives from its component units also varies. For example, the Veterans Benefits Administration PRB members receive ratings and recommended pay adjustments and bonus amounts from Veterans Benefits Administration units. VA policy requires formal minutes of all PRB meetings, which are to be maintained for 5 years. The official from the Office of Human Resources and Administration told us that the minutes are limited to decisions made, such as the recommended bonus amount for each SES member considered, and generally do not capture the deliberative process leading to those decisions. Data provided by VA on one VA component—the Veterans Integrated Services Network—showed that of the bonuses proposed for fiscal year 2006, the Veterans Health Administration PRB decreased 45 of the amounts initially proposed to it, increased 9, and left 64 unchanged.

At the conclusion of their deliberations, the Veterans Health Administration and Veterans Benefits Administration PRBs send their recommendations to the Under Secretary for Health and the Under Secretary for Benefits, respectively, who, at their sole discretion, may modify the recommendations for SES members under their authority. No documentation of the rationale for modifications is required. The recommendations, as modified, are then forwarded to the chairperson of the Veterans Affairs PRB, who reviews the decisions for apparent anomalies, such as awarding bonuses that exceed maximum amounts. The chairperson of the Veterans Affairs PRB then forwards the recommendations from the Veterans Health Administration, Veterans Benefits Administration, and Veterans Affairs PRBs to the Secretary for approval. The Secretary makes final determinations on SES member performance bonuses, with the exception of those for SES members in VA's Office of Inspector General. Recommendations from the Office of Inspector General PRB are sent directly to the VA Inspector General for final decision without review by the Veterans Affairs PRB or approval by the Secretary. The Secretary has sole discretion in accepting or rejecting the recommendations of the PRBs. According to an official in the Office of Human Resources and Administration, the Secretary modified one recommendation in 2006, but a prior secretary modified over 30 in one year. Recommendations for bonuses for members of the Veterans Affairs, Veterans Health Administration, and Veterans Benefits Administration PRBs themselves are made after the PRBs conclude their work.
The highest-level executives of each board rank the members of their respective PRBs and make recommendations, which are submitted to the Secretary. The Secretary determines any bonuses for the highest-level executives of the boards.

In 2006, VA's bonus pool was $3,751,630, or 9 percent of the aggregate basic pay of its SES members in 2005. VA awarded an average of $16,606 in bonuses in fiscal year 2006 to 87 percent of its career SES members. Approximately 82 percent of career SES members at headquarters received bonuses, compared with 90 percent in the field. Additionally, those in headquarters were awarded an average of about $4,000 more in bonuses than career SES members in field locations. Table 1 shows the average bonus amount, the percentage receiving bonuses, and the total rated at VA among career SES members and by headquarters and field locations for 2004 through 2006. In 2005, according to OPM's Report on Senior Executive Pay for Performance for Fiscal Year 2005, the most recent report available, VA awarded higher average bonuses to its career SES than any other cabinet-level department. OPM data show that six other cabinet-level departments awarded bonuses to a higher percentage of their career SES members. When asked about possible reasons for VA's high average bonus award, a VA official in the Office of Human Resources and Administration cited the outstanding performance of VA's three organizations and the amount allocated to SES member bonuses.

Both OPM and OMB play a role in the review of agencies' senior performance appraisal systems and have jointly developed certification criteria. OPM issues guidance each year to help agencies improve the development of their SES performance appraisal systems and also reviews agency certification submissions to ensure they meet specified criteria. To make its own determination, OMB examines agencies' performance appraisal systems against the certification criteria, primarily considering measures of overall agency performance, such as an agency's results from a Program Assessment Rating Tool review or under the President's Management Agenda. Specifically, to qualify for the use of SES pay flexibilities, OPM and OMB evaluate agencies' senior performance appraisal systems against nine certification criteria. These certification criteria are broad principles that position agencies to use their pay systems strategically to support the development of a stronger performance culture and the attainment of the agencies' missions, goals, and objectives. The nine criteria are alignment, consultation, results, balance, assessments and guidelines, oversight, accountability, performance differentiation, and pay differentiation. See appendix I for a description of the certification criteria.

There are two levels of performance appraisal system certification available to agencies: full and provisional. To receive full certification, the design of the system must meet the nine certification criteria, and agencies must, in the judgment of OPM and with concurrence from OMB, provide documentation of prior performance ratings to demonstrate compliance with the criteria. Full certification lasts for 2 calendar years. Provisionally certified agencies are also granted the authority to apply higher caps on SES pay and total compensation just as those with fully certified systems are, even though agencies with provisional certification do not meet all nine of the certification criteria. Provisional certification lasts for 1 calendar year.
According to OPM, the regulations were designed to cover initial implementation of the certification process. Now that all agencies have operated under the system, all nine criteria must be met for an agency to be certified, even provisionally. According to OPM, for an agency to receive full certification in 2007, it must show that it has 2 years of making meaningful performance distinctions in ratings, pay, and awards, and that the agency's performance plans fully met all the criteria without requiring extensive revision. After OMB concurrence, the Director of OPM certifies the agency's performance appraisal system and formally notifies the agency with a letter specifying full, provisional, or no certification. Of the 42 performance appraisal systems that were certified in 2006, only the Department of Labor's system received full certification. According to OPM's Web site, as of June 5, 2007, four agencies had received full certification of their senior performance appraisal systems—the Department of Commerce for 2007 through 2008, the Department of Labor for 2006 through 2007, the Federal Communications Commission for 2007 through 2008, and the Federal Energy Regulatory Commission for 2007 through 2008. If provisional or no certification is recommended, the letter from OPM provides the agency with specific areas of concern identified through the review process. These comments may direct an agency to focus more on making meaningful distinctions in performance or on improving the type of performance measures used to evaluate SES members. For example, in OPM's 2007 certification guidance, the OPM Director asked agencies to place more emphasis on achieving measurable results, noting that many plans fall short of identifying the measures used to determine whether results are achieved. In addition, OPM asked agencies to highlight in their 2007 certification requests any description or evidence of improvements made as a result of comments from OPM or OMB in response to the agency's 2006 certification submission.

VA received provisional certification for each of the years 2004 through 2006. In its 2006 letter to VA explaining the decision to grant provisional rather than full certification, OPM stated that while the VA "system met certification criteria, clear alignment and measurable results must be evident in all plans across the entire agency." In addition, OPM said that it expected to see "well over 50 percent of an executive's performance plan focused on business results" and that VA "needs to ensure its 2007 executive performance plans weight business results appropriately." VA officials told us that the 2007 submission is in draft and that they expect to submit it to OPM by the June 30, 2007, deadline. Our preliminary review of VA's requirements for performance plans contained in its 2006 submission and 2007 draft submission shows that VA made changes to the policy requirements for its performance plans to reflect a greater emphasis on measurable results. Specifically, the 2007 policy provides that each critical element and performance element will be weighted, which was not required in 2006. These performance requirements, according to the policy, will be described in terms of specific results, with metrics, that the SES member must accomplish for the agency to achieve its annual performance goals, and they will represent at least 60 percent of the overall weight of the performance plan. The policy further states that the expected results should be specific, measurable, aggressive yet achievable, results-oriented, and time-based.
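As a simple illustration of the weighting rule described above, the sketch below checks that a plan's element weights sum to 100 percent and that results-focused elements account for at least 60 percent of the total. The element names, data structure, and use of Python are hypothetical and are not drawn from VA's actual plan format.

```python
# Illustrative sketch only; element names and the plan structure are hypothetical.

def plan_meets_weighting_policy(elements, results_minimum=60):
    """elements: list of (name, weight_percent, is_business_result) tuples.
    Returns True if the weights sum to 100 and results-focused elements carry
    at least `results_minimum` percent of the plan's overall weight."""
    total_weight = sum(weight for _, weight, _ in elements)
    results_weight = sum(weight for _, weight, is_result in elements if is_result)
    return total_weight == 100 and results_weight >= results_minimum

sample_plan = [
    ("Organizational results",  40, True),
    ("Program outcomes",        25, True),
    ("Customer satisfaction",   15, False),
    ("Employee perspective",    10, False),
    ("Leadership competencies", 10, False),
]
print(plan_meets_weighting_policy(sample_plan))  # True: results elements carry 65 percent
```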
In response to concerns expressed by members of Congress and in media reports about SES member bonuses, VA's Secretary recently requested that OPM review VA's performance management program for senior executives to ensure that its processes are consistent with governing statutes and with OPM regulations and guidance. VA officials indicated that while OPM's review encompasses some of the same areas as those required for 2007 certification, VA requested a separate report from OPM. We have stated that it is important for OPM to continue to carefully monitor the implementation of agencies' systems and the certification process with the goal of helping all agencies receive full certification of their systems. Requiring agencies with provisional certification to reapply annually rather than every 2 years helps to ensure continued progress in fully meeting congressional intent in authorizing the new performance-based pay system. VA has achieved provisional certification of its SES performance management system for 2004 through 2006.

Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you have. For further information regarding this statement, please contact J. Christopher Mihm at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement included George Stalcup, Director; Belva Martin, Assistant Director; Carole J. Cimitile; Karin Fangman; Tamara F. Stenzel; and Greg Wilmoth.

As described in appendix I, the certification criteria provide the following. Individual performance expectations must be linked to or derived from the agency's mission, strategic goals, program/policy objectives, and/or annual performance plan. Individual performance expectations are developed with senior employee involvement and must be communicated at the beginning of the appraisal cycle. Individual expectations describe performance that is measurable, demonstrable, or observable, focusing on organizational outputs and outcomes, policy/program objectives, milestones, and so forth. Individual performance expectations must include measures of results, employee and customer/stakeholder satisfaction, and competencies or behaviors that contribute to outstanding performance. The agency head or a designee provides to senior employees and to appropriate senior employee rating and reviewing officials (1) assessments of the performance of the agency overall, as well as of each of its major program and functional areas, such as reports on the agency's goals and other program performance measures and indicators, and (2) evaluation guidelines based, in part, upon those assessments. The guidance provided may not take the form of quantitative limitations on the number of ratings at any given rating level. The agency head or a designee must certify that (1) the appraisal process makes meaningful distinctions based on relative performance; (2) results take into account, as appropriate, the agency's performance; and (3) pay adjustments and awards recognize individual and organizational performance. Senior employee ratings (as well as subordinate employees' performance expectations and ratings, for those with supervisory responsibilities) appropriately reflect employees' performance expectations, relevant program performance measures, and other relevant factors.
Among other provisions, the agency must provide for at least one rating level above Fully Successful (which must include an Outstanding level of performance) and, in applying those ratings, must make meaningful distinctions among executives based on their relative performance. The agency should be able to demonstrate that the largest pay adjustments, the highest pay levels (base pay and performance awards), or both are provided to its highest performers and that, overall, the distribution of pay rates within the SES rate range and of pay adjustments reflects meaningful distinctions among executives based on their relative performance.
Key practices of effective performance management for the Senior Executive Service (SES) include the linkage, or "line of sight," between individual performance and organizational success, the importance of linking pay to individual and organizational performance, and the need to make meaningful distinctions in performance. GAO identified certain principles for executive pay plans that should be considered to attract and retain the quality and quantity of executive leadership necessary to address 21st century challenges, including that the plans be sensitive to hiring and retention trends; reflect knowledge, skills, and contributions; and be competitive. This testimony focuses on the Department of Veterans Affairs' (VA) process for awarding bonuses to SES members, the amounts and percentages of bonuses awarded for fiscal years 2004 through 2006 based on data reported by VA, and the Office of Personnel Management's (OPM) and the Office of Management and Budget's (OMB) roles in certifying federal agencies' SES performance appraisal systems. GAO analyzed VA's policies and procedures for awarding bonuses and data provided by VA on the amounts and percentages of bonuses, and interviewed knowledgeable VA officials. Information on OPM's and OMB's certification process was based on our 2007 report on OPM's capacity to lead and implement reform.

VA requires that each senior executive have an executive performance plan or contract in place for the appraisal year that reflects measures balancing organizational results with customer satisfaction, employee perspectives, and other appropriate measures. VA uses four performance review boards (PRB) to review and make recommendations on SES ratings, awards, and pay adjustments based on these performance plans. VA's Secretary appoints members of three of the four boards on the basis of the position held within the agency, and consideration is given to those positions where the holder would have knowledge about the broadest group of executives. Members of the fourth board are appointed by VA's Inspector General. VA's PRBs vary in size, composition, and number of SES members considered for bonuses, and each PRB, within the scope of VA's policies, develops its own procedures and criteria for making bonus recommendations. According to VA policy, bonuses are generally awarded only to those rated outstanding or excellent who have demonstrated significant individual and organizational achievements during the appraisal period. According to data reported by OPM, in fiscal year 2005 VA awarded higher bonus amounts to its career SES than any other cabinet-level department; however, according to OPM's data, six other cabinet-level departments awarded bonuses to a higher percentage of their career SES. OPM and OMB evaluate agencies' SES performance appraisal systems against nine certification criteria jointly developed by the two agencies and determine whether agencies merit full, provisional, or no certification. VA has been granted provisional certification in each of the years 2004 through 2006. Our review of VA's requirements for SES performance plans, as represented in both its 2006 submission and 2007 draft submission to OPM, shows that VA made changes to the requirements for its performance plans to reflect a greater emphasis on measurable results.
Drivers’ licenses have become widely accepted as an identity document because they generally contain identifying information such as the licensee’s name, photograph, physical description, and signature and may include features that make them more difficult to counterfeit or alter. As of 2010, about 210 million drivers were licensed in the United States. Due to the crucial role of the driver’s license as an identity document, individuals may try to fraudulently obtain one for a wide range of purposes. For example, some may try to get a license in someone else’s name to commit financial fraud, such as stealing government benefits, opening bank or credit card accounts, and writing counterfeit checks. Criminals may also obtain multiple licenses under different identities so they can commit criminal acts and, if apprehended, avoid having charges associated with their true identity. Illegal aliens may use a counterfeit license to live in the United States.

The prevalence of driver’s license fraud in the United States is difficult to fully determine or quantify. The Federal Bureau of Investigation collects data from all states on a number of different categories of crimes through its Uniform Crime Reporting program, but this program does not have a category specifically for driver’s license fraud. Estimates are, however, available from a few sources. For example, in 2010 the Federal Trade Commission (FTC) reported that complaints involving the issuance or forging of drivers’ licenses accounted for 0.9 percent of the approximately 251,000 identity theft complaints it received overall (about 2,300 complaints). Some evidence suggests, though, that many identity theft cases go unreported and thus the total number of identity theft cases may be substantially higher than the FTC figure. In addition, the Center for Identity Management and Information Protection analyzed 517 identity theft cases investigated by the U.S. Secret Service between 2000 and 2006 and found that counterfeit drivers’ licenses were used in 35 percent of these cases.

Verifying license applicants’ identity and preventing fraud have traditionally been a state responsibility, but after the terrorist attacks of September 11, 2001, there was increased federal interest in driver’s license issuance and security, as evidenced by the passage of the REAL ID Act of 2005. States are not mandated to comply with the Act; however, the Act establishes specific procedures states must follow when issuing drivers’ licenses in order for those licenses to be accepted by federal agencies for “official purposes,” including, but not limited to, boarding commercial aircraft, entering federal buildings, and entering nuclear power plants. As of July 2012, 17 states had enacted laws expressly opposing implementation or prohibiting the relevant state agencies from complying with the REAL ID Act. Under the REAL ID Act, DHS has primary responsibility for establishing how and when states can certify their compliance and for determining whether states are compliant. DHS issued regulations in 2008 that provided details on how it would determine whether states were REAL ID-compliant. Although the Act set a May 11, 2008, deadline for compliance, DHS’ regulations allowed states to request an extension of the full compliance deadline to May 11, 2011, and the agency later pushed the date back to January 15, 2013. States that are interested in complying with the Act must submit documentation no later than 90 days before this deadline (around October 15, 2012).
Initially, after the January deadline, individuals with licenses from states determined to be compliant may continue to use their licenses for official purposes, regardless of when those licenses were issued, according to DHS. However, by December 1, 2014, certain individuals—those born after December 1, 1964—must be issued new, REAL ID-compliant licenses by states that have been determined to be compliant in order to use their licenses for official purposes. By December 1, 2017, all license holders must be issued new, REAL ID-compliant licenses in order to use them for official purposes.

The REAL ID Act sets minimum standards for several aspects of the license and identification card issuance process. In the area of identity verification, the Act establishes the following requirements, among others, for states seeking compliance:

Documentation: States must require license applicants to provide documentation of their name, date of birth, SSN, address of principal residence, and lawful status in the United States.

Verification: States must verify with the issuing agency the issuance, validity, and completeness of the documents presented as proof of name, date of birth, SSN (or verify the applicant's ineligibility for an SSN), address, and lawful status, with specific requirements to confirm SSNs with SSA and to verify the lawful status of non-citizens through an electronic DHS system.

Image capture: States must capture and store digital images of all documents presented by license applicants to establish identity, such as passports and birth certificates, and capture the facial images of all applicants.

Renewals: States must establish an effective procedure for confirming or verifying the information provided by individuals seeking to renew their licenses.

One driver, one license: States must refuse to issue a license to an applicant who already holds a license from another state without confirming that the other license has been or is in the process of being terminated.

Staff training: States must establish training programs on recognizing fraudulent documents for appropriate employees involved in issuing licenses.

State driver licensing agencies use a combination of different techniques to verify the identity of license applicants and prevent fraud. These various procedures are used together to detect license fraud, and no single technique is sufficient, according to officials at several licensing agencies. All states have in place some procedures to detect counterfeit documents, which may include electronic systems to verify data contained on documents—such as the Social Security card—or visual inspection of documents. Many states also use other techniques to detect fraud, including facial recognition, cross-state checks, or internal controls for licensing transactions. (See table 1.)

All states plus the District of Columbia are now using Social Security Online Verification (SSOLV) to verify license applicants' SSNs and other personal data, consistent with the REAL ID Act's requirement to confirm SSNs with SSA. The number of states verifying SSNs with SSA has increased substantially since 2003, when we reported that 25 states were doing so. Even states with laws opposing implementation of the REAL ID Act are checking SSNs through SSOLV. Use of SSOLV allows states to verify that the SSN provided by a license applicant is valid—that is, to check whether (1) someone has been issued the SSN, (2) the SSN matches the name and date of birth provided by the applicant, and (3) the SSN is associated with a deceased individual.
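The following is a minimal sketch of the record-matching logic such a check involves. The record layout, field names, and use of Python are illustrative assumptions and do not represent SSA's actual SSOLV interface.

```python
# Illustrative sketch only; the record layout and field names are hypothetical
# and do not represent SSA's actual SSOLV interface.

SSA_RECORDS = {
    # ssn: (name, date_of_birth, deceased)
    "123-45-6789": ("JANE DOE", "1970-01-15", False),
}

def ssolv_style_check(ssn, name, date_of_birth):
    """Return 'verified' only if the SSN has been issued, it matches the name and
    date of birth provided, and it is not associated with a deceased individual;
    otherwise return the reason for non-verification."""
    record = SSA_RECORDS.get(ssn)
    if record is None:
        return "non-verified: SSN not issued"
    rec_name, rec_dob, deceased = record
    if (name.upper(), date_of_birth) != (rec_name, rec_dob):
        return "non-verified: name or date of birth does not match"
    if deceased:
        return "non-verified: SSN associated with a deceased individual"
    return "verified"

print(ssolv_style_check("123-45-6789", "Jane Doe", "1970-01-15"))    # verified
print(ssolv_style_check("123-45-6789", "Jane Smith", "1970-01-15"))  # name mismatch
```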
Officials in most of the states we interviewed said they never or rarely issue a permanent driver's license before obtaining a verification of the applicant's personal data. In fiscal year 2011, the SSOLV verification rate—that is, the percentage of SSOLV verification requests that confirmed the validity of the personal data submitted—was 93 percent on average nationwide, and almost all the states had rates above 85 percent. The national average is an increase from the 89 percent average rate in fiscal year 2008, the earliest year for which data were available. Officials in almost all of the states we interviewed said they had no concerns about the percentage of SSOLV queries that failed to verify. The most common reason for non-verifications nationwide in 2011 was that the name presented by the applicant did not match the name associated with the SSN on file with SSA. Officials in most of the states we interviewed cited name changes as the most common reason for this: a license applicant may, for example, have changed their name after marriage but not reported the change to SSA. In such cases, states may ask applicants to resolve the issue with SSA and then return to the licensing agency so the SSOLV query may be run again.

Most states are also using Systematic Alien Verification for Entitlements (SAVE), another REAL ID Act requirement, but officials in some of the states we interviewed reported challenges with the system. SAVE, operated by DHS, verifies the information in documents that non-citizen applicants provide to prove they have lawful status in the United States. As of 2012, licensing agencies in 42 states plus the District of Columbia had agreements with DHS to use it. However, a few states with such agreements do not use the system consistently for each non-citizen applicant. For example, officials in one state we interviewed said they used SAVE only when the documents submitted by a non-citizen raised questions, such as if they appeared tampered with or indicated a non-citizen no longer has lawful status in the country. Officials in the five states we interviewed that were not using SAVE most often cited technological challenges, such as difficulties providing front-counter staff in local issuance branches with routine access to the system. Officials in all but one of these states said they plan to start using the system if these issues are resolved.

Officials in half of the states we interviewed that were using SAVE said they were concerned about the verification rate they obtained. When states submit data from non-citizens' lawful status documents, the system searches a variety of DHS databases in an effort to verify the data. If the data are not verified on the first attempt, the state may initiate a second and then a third attempt, which entail manual checks by DHS staff and additional costs for the state. Officials in one state, for example, told us that when SAVE does not verify lawful status on the first attempt, moving on to additional attempts requires additional effort by staff. Officials in a few states said they believe data entry errors and delays in updating the data in DHS databases are common reasons that data are not verified; DHS officials also listed these as possible factors.
Among all licensing agencies using SAVE, the verification rate for initial SAVE queries during fiscal year 2011 varied from 45 percent to 91 percent, according to DHS data (see fig. 1). DHS officials said it is possible that states that submit SAVE queries only when there is a potential problem with a document have a higher percentage of queries that do not verify on the initial attempt.

Another fraud prevention technique is the inspection of identity documents by front-counter staff. In all the states we contacted, front-counter staff in local licensing offices inspect identity documents in an effort to detect counterfeits. This step is often done even for documents that are also verified electronically, such as SSN documents. Inspection generally involves the visual or physical examination of documents. Staff told us they look for security features embedded in authentic documents, such as watermarks and proper coloring in Social Security cards or raised seals on birth certificates. They may check whether documents are printed on the security paper used for authentic documents and whether a document appears to have been tampered with. Staff may use a variety of tools to assist with their inspection, such as black lights and magnifying glasses. For certain types of documents, such as out-of-state drivers' licenses, they may consult books showing the most current versions of these documents. Staff in about half of the states we interviewed also used document authentication machines designed to detect counterfeit documents such as out-of-state licenses and passports. Officials in one of these states explained that front-counter staff scan certain types of documents into the machines, and the machines indicate whether the document is authentic. Finally, training also plays a role in helping staff inspect documents. Officials in all the licensing agencies we contacted said they have provided fraudulent document recognition training to their staff, which is also required by the REAL ID Act; most said that 100 percent of their staff have received such training. Even officials in states we contacted that have laws prohibiting implementation of the REAL ID Act said they have taken this step.

Many states are using facial recognition techniques or fingerprinting, which, while not required by the REAL ID Act, may detect applicants who attempt to obtain a license under an identity other than their own. According to AAMVA, licensing agencies in 41 states plus the District of Columbia were using facial recognition, fingerprinting, or both techniques as of June 2012. Among the 11 states in our review, 5 routinely used biometric techniques as part of their verification procedures (4 used facial recognition and 1 used fingerprinting), and an additional 4 had plans to implement facial recognition procedures. Licensing agency officials in the remaining 2 states said they are barred by state law from using facial recognition to screen license applicants. State licensing agencies generally perform facial recognition checks only against their own photo databases, according to the National Institute of Standards and Technology (NIST), which conducts studies on facial recognition. Such checks compare the photo taken of an applicant against the other photos in the state's database to identify potential matches. These checks are not necessarily a purely automated function, and staff may need time to review images that are potential matches to determine if they really are of the same individual.
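As a simplified illustration of the one-to-many photo check described above, the sketch below flags database photos whose similarity to an applicant's photo exceeds a threshold so that staff can review them. The feature vectors, threshold, and cosine-similarity stand-in are assumptions for illustration; actual programs rely on specialized facial recognition software.

```python
# Illustrative sketch only; the feature vectors, threshold, and similarity measure
# are stand-ins for the specialized matching software that states actually use.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def candidate_matches(applicant_features, photo_database, threshold=0.9):
    """Compare the applicant's photo (as a feature vector) against every photo in
    the state's database and return the license numbers of potential matches,
    which staff would then review manually."""
    return [
        license_no
        for license_no, features in photo_database.items()
        if cosine_similarity(applicant_features, features) >= threshold
    ]

# Hypothetical database of previously issued licenses and their photo features.
db = {"D1234567": [0.9, 0.1, 0.4], "D7654321": [0.1, 0.8, 0.2]}
print(candidate_matches([0.88, 0.12, 0.41], db))  # ['D1234567'] -> flag for review
```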
While states’ facial recognition programs are focused on detecting in-state license fraud, states also have some procedures in place that may detect cross-state fraud. As of March 2012, 23 states and the District of Columbia were participating in a photo-sharing program facilitated by AAMVA that is designed to help detect fraud across state lines. This program allows a participating state to obtain the facial image associated with a surrendered license from the issuing state if that state also participates in the program. Through this process, a fraudulent license could be detected if a state query yields either no photo or a photo that does not match the applicant. In addition, state licensing agencies may detect cross-state fraud through other systems. For example, all states plus the District of Columbia participate in the Problem Driver Pointer System (PDPS) to check whether license applicants have adverse driving records. The PDPS searches the National Driver Register, a database of state-provided driver information maintained by the Department of Transportation's National Highway Traffic Safety Administration, and if an individual is identified as having an adverse record, the PDPS "points" to the state where the individual's record may be obtained. A PDPS check is typically used to detect certain serious traffic violations or license suspensions or revocations associated with an applicant in other states, which might make the applicant ineligible to receive a license in a new state. Because it identifies adverse actions in other states, PDPS may also help states identify applicants who already have licenses in other states that they have not divulged. Also, a few state licensing agencies said they may use the National Law Enforcement Telecommunications System to check whether a license applicant already has a license in another state. This system is generally available only to law enforcement personnel, not to all front-line staff in license issuance offices. Officials in one state told us they use it only in limited circumstances, such as when there is reason to suspect license fraud.

States also rely on internal controls over licensing transactions to help prevent fraud. For example, officials in almost all the states we interviewed said front-line staff may not override a non-match in an electronic verification system such as SSOLV in order to issue a license. Additionally, officials in most states told us managers examine licensing transactions to ensure proper procedures were followed. For example, officials in one state told us an audit team checks to make sure all required identity documents were collected as part of the transaction. Other procedures that states employ include monitoring licensing transactions to identify anomalies that may indicate internal fraud, such as the issuance of multiple duplicate licenses to the same individual; rotating staff among different stations so they do not know where they will be working on any particular day; and randomly assigning license applicants to the employee who will serve them, to avoid collusion.

Figure 2 illustrates how the various systems and efforts work together to detect and prevent driver's license fraud in one of the states we reviewed. This state's process includes a number of the different types of checks we have described. An individual applying for a driver's license proceeds through two different stations, where several checks and other steps are performed by different employees.
Certain steps are performed at one station only, such as taking the applicant’s photo, checking documents with the authentication machine and performing the electronic verifications such as SSOLV. But other steps are performed at both stations, including entering data from identity documents into the licensing agency’s computer system. State officials told us that having two employees enter applicants’ data helps guard against internal corruption, because if one employee tries to collude with an applicant to enter false information, this will be caught by the other employee. If no potential fraud is found through all the checks at the local branch, the applicant is issued a paper license valid for 45 days. The licensing agency performs some additional checks after the issuance of the temporary license, including a facial recognition check against all photos in its database and verification that the applicant’s mailing address is valid. If no concerns are found, the state mails a permanent license to the applicant. Although states are already implementing a number of the identity verification procedures required by the REAL ID Act, some states may not comply with certain provisions for various reasons. For example, officials in one state we interviewed said they are not verifying applicants’ lawful status through SAVE because the state does not require people to have lawful status in the United States to obtain a license. Officials said the state has no plans to enact such a requirement. In other cases, state officials told us they find certain statutory or regulatory requirements to be burdensome or unnecessary. For example, several of the states we interviewed do not verify SSNs through SSOLV when individuals renew their licenses, a step required by DHS’ regulations. State officials told us it is not necessary to take this step because these SSNs were already verified at the time of initial license application; officials in one state also mentioned the cost of a SSOLV query as a reason not to re-verify an SSN. DHS said one reason it is necessary to re-verify personal data such as SSNs for renewals is that such checks can detect cases where the data are actually associated with a deceased individual. Re-verifying SSNs may therefore detect attempts to renew a license fraudulently under the identity of a deceased individual. In addition, several of the states we interviewed do not require applicants to provide a document showing their SSN. Officials in one of these states said the Social Security card is easy to tamper with so it is more valuable to verify the actual SSN than to assess the validity of the paper document. Officials in two states said they do not plan to comply with the deadlines in DHS’ regulations, which will require states, after DHS has determined they are REAL ID-compliant, to process new applications and issue new licenses to certain license holders by 2014 and all license holders by 2017. These officials cited the expected resource burden of processing so many applications in a short period of time. Officials in most of the states we interviewed said their anti-fraud efforts have had success in preventing the use of counterfeit documents to obtain licenses, especially documents associated with completely fictitious identities. Indeed, officials in the majority of our 11 selected states said they have seen a decline in attempts to obtain licenses using counterfeit documents. SSOLV makes it harder for criminals to obtain a license under a fictitious identity. 
Officials in most of the states we interviewed said SSOLV checks have helped make the use of fraudulent documents more difficult. For example, officials in one state told us that before it started using SSOLV, about 75 percent of license fraud cases involved counterfeit Social Security cards, but the use of counterfeit documents such as these has declined significantly and is no longer the main type of fraud they see. Officials in several states also cited SAVE as having made the use of counterfeit documents more difficult. For example, officials in one state commented that when they started using SAVE in 2002, they identified many forged documents intended to prove lawful status in the United States, but over time they have seen fewer forgeries of these documents. Officials in several states said their efforts to train staff on fraudulent document recognition or to prevent corruption among their staff have also had an impact. Officials in one state said the use of counterfeit Social Security cards has declined partly because front-line staff are better trained on how to check documents for security features. Officials in another state told us internal control procedures, including having two staff separately inspect each document, have made license fraud more difficult to accomplish. Several states provided data indicating a decline in recent years in the number of license fraud investigations involving fraudulent documents. For example, one provided data showing a steady decline in the annual number of investigations based on referrals from front-counter staff, from 156 in 2002 to 36 in 2011. Officials said this trend reflects a decline in the use of counterfeit documents, because such cases are the ones typically detected by front-counter staff.

State officials also reported successes in using facial recognition technology to detect license fraud, particularly fraud involving identity theft. SSOLV has made it harder to get a license under a fictitious identity, but it cannot determine whether a valid SSN and other personal information (name and date of birth) submitted by a license applicant are truly associated with that applicant. Some evidence suggests that criminals seeking a license are now more likely to try to obtain one under another real identity—using genuine, and sometimes forged, identity documents to do so. For example, law enforcement officials in one state said that as the number of fraud cases involving counterfeit documents has declined, they have seen an increasing number of what they called imposter fraud: attempts to steal another person's complete identity—including name, SSN, and date of birth—and obtain a license under that identity. Officials in several states told us facial recognition plays an important role in preventing such fraud. Officials in one state, for example, said it is their most effective tool for detecting identity theft and has detected over 100 license fraud cases annually since 2008. Officials in another state told us facial recognition has resulted in about 6,200 investigations and 1,700 arrests since its implementation in 2010. Officials in a few states told us that after they introduced facial recognition they detected individuals with a number of licenses under different identities—as many as 10 different identities associated with one individual in one case. However, state officials also said there are some limitations in the ability of facial recognition to detect matches between photos when a person's appearance has been altered in one photo.
For that reason, among others, an official with NIST said facial recognition may be less effective than other biometric techniques, such as fingerprinting and iris recognition, in detecting matches.

Examples of How Facial Recognition Detected License Fraud
In one state we visited, licensing agency employees were issuing licenses to individuals using the real identities of other people for payments of $7,500 to $12,500 apiece. As part of the scheme, these employees provided their customers with legitimate identity documents belonging to other people, such as Social Security cards and birth certificates. Facial recognition successfully identified that the individuals who had paid for the fraudulent licenses had already received other identification documents from the state and therefore had photos in the state's database. In another example from the same state, according to state officials, a foreign national whom these officials identified as being on the "no-fly" list had obtained licenses under four different identities. This individual had been deported from the United States multiple times and each time was able to re-enter the country under a different identity. Using facial recognition software, the state was able to detect him by comparing the photos associated with the different licenses.

States' vulnerability to license fraud perpetrated by individuals who cross state lines has been a longstanding issue, and it remains a challenge for states despite the success officials report in detecting other kinds of fraud. Officials in the majority of states we interviewed told us their states bar their license holders from also holding licenses in other states, and the REAL ID Act also prohibits states from issuing a license to an applicant who already has one in another state. However, individuals may try to obtain licenses in multiple states. For example, criminals may try to get licenses under different identities by using the identity of someone who resides—and may have a license—in one state to obtain a license under that identity in a different state, perhaps to commit financial fraud under the stolen identity. Officials in all the states we interviewed acknowledged they lack the ability to consistently determine whether the identity presented by a license applicant is already associated with a license holder in another state. Some existing verification systems may accomplish this goal in limited circumstances but do not fully address the gap. For example, officials in a number of states told us a check against the problem driver database (the Problem Driver Pointer System) will not detect a license in another state if it is not associated with any driving violation. Moreover, the national law enforcement data exchange system (the National Law Enforcement Telecommunications System) is cumbersome to use if a state does not know in which state an applicant may already have a license, and in any case it is generally available only to law enforcement personnel. Similarly, the AAMVA photo-sharing program cannot be used to detect fraud if an applicant does not present an out-of-state license to be verified, and not all states participate. Finally, facial recognition programs generally check only a state's own internal photo database. They cannot detect cases in which a criminal tries to obtain a license under the identity of someone else if neither of them has a license in the state.
Example of Cross-State Fraud
In one state, an individual obtained an identification card under the identity of a person residing in another state by successfully using identity documents belonging to that person. The identity thief used this identification card as authorization to work. The crime was discovered only when the victim filed an identity theft complaint with the state in which the criminal had obtained the fraudulent identification card and had been working.

States are trying to develop additional mechanisms for addressing cross-state license fraud, but none are fully operational yet. For example, a consortium of five states is developing a state-to-state verification system that would enable states to check whether a license applicant's identity—including name, date of birth, and a portion of the SSN—is already associated with a license in other states. Officials in almost all the states we interviewed said such a system would be useful. Officials in several states said it could detect criminals who try to use the identity of someone in one state to obtain a license in another state—provided the identity theft victim is a license holder in the first state. However, officials in a number of states said there would be challenges to implementing such a system, primarily related to cost and to ensuring the security of personal data. The state consortium expects to complete its design work by 2013 and implement a pilot by 2015, but said it may not be until 2023 that the states have entered data on all their license holders into the system.

Beyond the planned state-to-state system, some states have considered other approaches to addressing cross-state license fraud. Officials in several states told us cross-state facial recognition, in which states run checks against neighboring states' photo databases, could be a helpful tool. However, they cited obstacles, including the much larger number of potential matches that staff would have to examine, the technological incompatibility of different states' facial recognition programs, and privacy concerns. In addition, officials in one state suggested that it would be helpful if SSA informed states when an SSN submitted to SSOLV for verification had already been submitted previously, because this would alert states that someone might be fraudulently applying for licenses in multiple states. SSA officials said developing this capacity could slow down SSOLV response times and raise privacy concerns because SSA would need to store the SSNs submitted by states. While AAMVA's photo-sharing program can play a role in detecting certain kinds of cross-state fraud, the program's usefulness is limited because fewer than half the states currently participate, and AAMVA told us it lacks the resources to promote use of the program among additional states.

Even with the progress reported in preventing the use of certain types of counterfeit documents, the use of forged or improperly obtained birth certificates remains a challenge that leaves states vulnerable to license fraud. Officials in about half the states we interviewed said it can be challenging to detect counterfeit birth certificates. Officials in several states told us the wide variety of formats in which birth certificates are issued across the country makes detecting counterfeits difficult.
According to NAPHSIS, there are thousands of different versions of birth certificates because formats vary over time and among issuing agencies. Officials in one state noted, for example, that licensing staff often see birth certificates from another state, where security features on birth certificates vary from county to county, and it can be difficult to keep track of all the variations.

Besides forging birth certificates, criminals may improperly obtain someone else's genuine birth certificate. For example, criminals may steal or purchase the documents and use them as part of packages of identity documents to obtain licenses fraudulently. In other cases, criminals may be able to obtain another person's birth certificate directly from a vital records agency. According to NAPHSIS, 15 states have virtually no restrictions on who may obtain a birth certificate from a vital records agency. Even in states that restrict access, there may be limited safeguards to ensure that birth certificates are provided only to those who have a legitimate right to them. For example, officials at one vital records agency acknowledged that local staff who issue birth certificates do not receive fraudulent document recognition training, and a criminal with a sophisticated fake identification document, such as a driver's license, could use it to obtain someone else's birth certificate.

Example of Using a Forged Birth Certificate to Obtain a License Fraudulently
In one state, an individual obtained a victim's personal data and applied for a license using a counterfeit birth certificate and counterfeit SSN documentation. The fraud was discovered only when the alleged criminal fled an accident involving a plane carrying narcotics and checked into a nearby motel under the false identity.

A system exists that could help address the issue of counterfeit birth certificates and meet the REAL ID requirement to verify birth certificates with the issuing agency when they are submitted by license applicants, but state officials said there are challenges with using it, and no states are currently doing so for this purpose. The Electronic Verification of Vital Events (EVVE) system is designed to verify the accuracy of data on birth certificates, including name, date of birth, and either the date the certificate was filed or the file number. It is operated by NAPHSIS, the organization that represents the nation's vital records agencies. As of February 2012, 43 of 57 vital records agencies were participating in the system, meaning at least some of their birth records could be electronically verified. Licensing agencies in three states participated in a multi-year pilot, ending in 2011, in which they used the system to verify birth certificates for license applicants. Officials in one of the pilot states told us that, based on their experience, EVVE has the potential to help detect counterfeit birth certificates, because while criminals may be able to obtain another person's name and date of birth and create a counterfeit birth certificate, it is more difficult to obtain the correct file date on the victim's birth certificate—which must match to pass an EVVE check. However, officials in this state cited challenges they experienced during the pilot, including confusion about which date on the birth certificate is the file date that should be entered for verification and gaps in the state's vital records data, which make verification difficult for birth certificates filed during certain periods of time.
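The following is a minimal sketch of the kind of data match described above, in which the file date or file number must agree in addition to the name and date of birth. The record layout and use of Python are illustrative assumptions and do not represent the actual EVVE interface operated by NAPHSIS.

```python
# Illustrative sketch only; the record layout is hypothetical and does not
# represent the actual EVVE interface operated by NAPHSIS.

BIRTH_RECORDS = {
    # (name, date_of_birth): (file_date, file_number)
    ("JOHN Q PUBLIC", "1985-03-02"): ("1985-03-10", "1985-000123"),
}

def evve_style_check(name, date_of_birth, file_date=None, file_number=None):
    """Verify the name and date of birth against the vital record, plus either
    the date the certificate was filed or its file number."""
    record = BIRTH_RECORDS.get((name.upper(), date_of_birth))
    if record is None:
        return False
    rec_file_date, rec_file_number = record
    return file_date == rec_file_date or file_number == rec_file_number

print(evve_style_check("John Q Public", "1985-03-02", file_date="1985-03-10"))  # True
print(evve_style_check("John Q Public", "1985-03-02", file_date="1985-03-02"))  # False (wrong file date)
```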
Similarly, officials in most of the states we interviewed that had not participated in the pilot said it could be helpful to use EVVE, but also expressed concerns about the cost of using the system, the fact that not all vital records agencies are participating, and the completeness or accuracy of the vital records data that are already available for verification through it. As an interim solution, some licensing agencies are working towards verifying at least their own states’ birth records electronically. Officials in several of the licensing agencies we interviewed have considered or are planning to start working with the vital records agencies in their states to electronically verify birth records for license applicants born in-state. Officials in one licensing agency told us they have discussed this approach with the vital records agency in their state, and see it as an interim step before EVVE is more viable. Licensing agencies face obstacles in these efforts, though, such as birth records not being fully in electronic format or the reluctance on the part of vital records agencies to participate in electronic birth record verification arrangements.

Our investigative staff exploited the vulnerabilities discussed above to fraudulently obtain drivers’ licenses in the three states where we made such attempts. In each state, investigative staff obtained genuine licenses under fictitious identities—combinations of name, date of birth, and SSN that do not correspond to any real individuals. In two states, a staff member obtained two licenses under two different identities. In each attempt, staff visited local license issuance branches and submitted various counterfeit documents to establish their identities, depending on state requirements. In all cases they submitted a counterfeit driver’s license and a counterfeit birth certificate, both purportedly from other states. (See figure 3 below for examples of the counterfeit birth certificates used.) In some cases, staff also submitted other documents including fake Social Security cards and fake pay stubs. In most of these five attempts across the three states, we were issued permanent or temporary licenses in about 1 hour or less. In only one case did a front-counter clerk appear to question the validity of one of the counterfeit documents, but this clerk did not stop the issuance process. All three states check the validity of applicants’ personal data—including SSN—through SSOLV, but assuming SSOLV checks were performed, they were not sufficient to detect these fraud attempts because the SSNs were valid.

These successful fraud attempts demonstrate several vulnerabilities in states’ defenses against license fraud. First, the fact that we were able to use counterfeit out-of-state licenses in each attempt further confirms states’ inability to consistently check if applicants’ identities are already associated with licenses in other states. If the envisioned state-to-state verification system were in place, then the states where we applied might have discovered that the out-of-state licenses we submitted were fakes and did not actually exist. Under current plans, this system would verify the validity of licenses submitted from other states as well as check if applicants have licenses from other states that they have not divulged.
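The state-to-state verification system is still under development, so its actual design is not described here; the sketch below only illustrates the basic idea the report describes, checking whether an applicant's name, date of birth, and a portion of the SSN are already tied to a license in another state. All record layouts, function names, and state codes in the sketch are hypothetical.

```python
from typing import Dict, List, Tuple

IdentityKey = Tuple[str, str, str]    # (normalized name, date of birth, last four of SSN)
LicensePointer = Tuple[str, str]      # (state code, license number)

def build_index(state_records: Dict[str, List[dict]]) -> Dict[IdentityKey, List[LicensePointer]]:
    """Build a shared index from each participating state's license records.

    Only minimal identity fields are indexed; the full record stays with the issuing state.
    """
    index: Dict[IdentityKey, List[LicensePointer]] = {}
    for state, records in state_records.items():
        for rec in records:
            key = (rec["name"].upper(), rec["dob"], rec["ssn"][-4:])
            index.setdefault(key, []).append((state, rec["license_no"]))
    return index

def check_applicant(index: Dict[IdentityKey, List[LicensePointer]],
                    applicant: dict, applying_state: str) -> List[LicensePointer]:
    """Return any licenses in other states already tied to this identity."""
    key = (applicant["name"].upper(), applicant["dob"], applicant["ssn"][-4:])
    return [ptr for ptr in index.get(key, []) if ptr[0] != applying_state]

# An identity already licensed in hypothetical state "AA" surfaces when the same name,
# date of birth, and SSN fragment are presented to state "BB".
index = build_index({"AA": [{"name": "Pat Roe", "dob": "1975-01-15",
                             "ssn": "123456789", "license_no": "A1234567"}]})
print(check_applicant(index, {"name": "Pat Roe", "dob": "1975-01-15",
                              "ssn": "123456789"}, applying_state="BB"))
# [('AA', 'A1234567')]
```

In practice a hit would likely flag the application for manual review rather than deny it automatically, and an applicant presenting a purported out-of-state license that appears nowhere in the index would also stand out for follow-up, which is one way counterfeit out-of-state licenses like those used in our tests might be caught.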
Even in the absence of the state-to-state system, if all states participated in AAMVA’s photo sharing program, then the counterfeit out-of-state licenses might have been detected through a request to the purported state of origin of the license to validate it. We specifically selected states for our undercover work that are not among the 23 participating in this program. But as long as any states are not participating, criminals could present counterfeit licenses from these states, and even participating states would be vulnerable. The second vulnerability relates to birth certificates. We were likely able to use counterfeit birth certificates containing fictitious information because no state licensing agencies are verifying birth certificate data through EVVE. None of the front-line clerks in the offices where we applied for licenses questioned the validity of the counterfeit birth certificates presented. Finally, the third vulnerability is that some states are still not using facial recognition or other biometric techniques to detect identity theft. The fact that the two states where our staff applied for multiple licenses are among the nine states that do not use facial recognition technology or other biometric techniques most likely made it easier in these states for our staff to obtain two licenses under two different identities. Facial recognition checks may be able to detect multiple licenses associated with the same individual. While it might still be possible for a criminal to obtain a license fraudulently even if all states utilized a state-to-state verification system, EVVE, and facial recognition, use of these systems would likely have increased the chances of detecting our fraudulent license applications. SSA has taken actions that enhance licensing agencies’ ability to verify SSNs and other personal data. Specifically, the agency has addressed two areas of concern that we raised in 2003. First, to enhance the level of service provided to states, SSA established performance goals, including hours when SSOLV is to be available and response times. Second, to address a vulnerability that might leave states open to license fraud by criminals stealing the identity of a deceased individual, SSA now automatically checks all inquiries against its death records. DHS has similarly taken several actions to improve the usefulness of SAVE. To improve data accuracy and verification rates, the agency monitors verification rates in order to identify problems with data accuracy in the databases SAVE accesses, which contribute to unsuccessful initial verification attempts. Officials in several of the states we interviewed acknowledged that DHS has been taking steps to improve verification rates, the timeliness of query responses, and the accuracy of underlying data, and these officials reported improvements in these areas. In addition, DHS has recently developed a new portal for accessing and using SAVE. Known as the Verification of Lawful Status (VLS) system, it will be accessible through AAMVA’s electronic hub for accessing other verification systems such as SSOLV. DHS is pilot testing VLS, with roll out to all states planned by the end of fiscal year 2012. Officials in several states told us they believe this new approach will make it easier for them to use SAVE more consistently or extensively. Officials in two states, in fact, said the deployment of VLS would enable them to start using the system in the future. 
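The report describes AAMVA's electronic hub as a single access point through which licensing agencies reach verification services such as SSOLV and, once deployed, VLS. The sketch below illustrates that routing pattern only; the service names are real, but every interface, record layout, and response value shown is an assumption made for this example and does not reproduce the actual SSOLV, SAVE, or VLS specifications.

```python
from typing import Callable, Dict

class VerificationHub:
    """Routes identity queries to registered back-end verification services.

    The routing pattern is the point of the sketch; the real AAMVA hub, SSOLV,
    and SAVE/VLS interfaces are not reproduced here.
    """

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], bool]] = {}

    def register(self, name: str, service: Callable[[dict], bool]) -> None:
        self._services[name] = service

    def verify(self, name: str, query: dict) -> str:
        if name not in self._services:
            return "SERVICE UNAVAILABLE"
        return "MATCH" if self._services[name](query) else "NO MATCH"

# Illustrative back ends: an SSN check that also screens against death records,
# and a lawful-status check keyed on an immigration document number.
ssa_records = {("PAT ROE", "1975-01-15", "123456789"): {"deceased": False}}
dhs_records = {"A012345678": "LAWFUL STATUS CONFIRMED"}

def ssolv_check(q: dict) -> bool:
    rec = ssa_records.get((q["name"].upper(), q["dob"], q["ssn"]))
    return rec is not None and not rec["deceased"]

hub = VerificationHub()
hub.register("SSOLV", ssolv_check)
hub.register("VLS", lambda q: q["doc_no"] in dhs_records)

print(hub.verify("SSOLV", {"name": "Pat Roe", "dob": "1975-01-15", "ssn": "123456789"}))  # MATCH
print(hub.verify("VLS", {"doc_no": "A999999999"}))                                        # NO MATCH
```

A single-access-point design like this lets a licensing agency add or swap back-end checks, for example adding the death-record screen that SSA now applies to all SSOLV inquiries, without changing front-counter workflows.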
DHS has conducted webinars for licensing agency staff on using SAVE, and it is developing online training modules that DHS officials say will also include instruction on interpreting verification results. Beyond these efforts to improve existing verification systems, DHS has provided financial assistance to support states’ efforts to develop new verification systems that could be used to comply with the REAL ID Act. Section 204 of the REAL ID Act authorizes grants to assist states in conforming to the minimum standards in the Act. DHS has awarded about $63 million through various grants since 2008 for upgrading the communications and verification systems infrastructure including development of the state-to-state system and pilot testing other systems. All of these funds were awarded to a group of five states that were part of a consortium that was formed for this purpose. At least half of the $63 million is being used by the consortium for the development and implementation of the state-to-state system, and these funds are available through fiscal year 2016. DHS officials said these funds are designed in part to induce states to start using the system, but in the longer term, the agency expects states to pay for the operation of the system. Besides funding support, DHS has also provided technical advice to help the states understand the federal requirements the system must meet. In addition to the funds set aside for the state-to-state system, some of the grants were also used to support a pilot project in which licensing agencies verified birth certificates through EVVE. However, based on a recommendation from the consortium, DHS does not plan to provide any additional financial support to state licensing agencies for further pilot testing of EVVE because of high transaction costs charged by state vital records agencies. DHS officials are also concerned about inaccuracies in electronic birth records that may lead to non-verifications, and the fact that EVVE checks may still be evaded by people who obtain someone else’s birth certificate in one of the states where birth records are accessible to the general public. DHS has also provided grants to individual states to help them improve their driver’s license security procedures, including identity verification. The Driver’s License Security Grant Program (DLSGP) provided over $200 million in grants to states and territories from fiscal years 2008 to 2011. All but 2 states applied for and received grant funds during this time period (see fig. 4). Initially established as the REAL ID Demonstration Grant program, the DLSGP provides assistance to states for improving the security and integrity of their drivers’ licenses in ways that are consistent with the requirements of the REAL ID Act. Both states with and without laws opposing implementation of the REAL ID Act were eligible to apply for and received these grants. Funds could be used for planning activities, equipment purchases, equipment maintenance and repair, and related costs. Officials in states we interviewed said they used their funds for efforts such as installing or updating facial recognition systems, providing staff anti-fraud training and information sharing, and in one case supporting efforts by the state’s vital records agency to digitize its birth records. DHS has conducted other activities that may help combat license fraud generally, and officials in several of the states we interviewed told us they were participating in these efforts. 
However, these efforts are broad in nature and are not specifically designed to support compliance with the REAL ID Act. For example, DHS operates a forensic laboratory that investigates the use of counterfeit identity documents and is a resource available to driver licensing agencies. DHS also leads task forces involving federal, state, and local law enforcement agencies that are designed to combat identity document fraud, including driver’s license fraud. These task forces seek to pursue criminal prosecutions and financial seizures.

Despite the approaching January 2013 deadline for compliance, DHS has not provided timely, comprehensive, or proactive guidance on how states seeking REAL ID compliance could meet the identity verification requirements. For example, DHS did not issue written guidance on how to meet specific REAL ID Act identity verification requirements for over 4 years after it issued its final regulations in 2008. Officials in most of the states we interviewed expressed a need for additional guidance on how they could meet the identity verification requirements of the REAL ID Act and DHS regulations. Because of this lack of guidance, officials in these states said they are uncertain as to whether they will be in compliance with various provisions when the law goes into effect, and some are concerned about investing resources in particular steps only to find out afterwards that they are not in compliance. For example, officials in one state said there was a lack of clarity about whether military identification cards may be used to prove identity under the REAL ID Act. In the absence of formal written guidance, they have often had to make assumptions based in part on DHS officials’ informal remarks.

The guidance DHS has provided regarding verification of license applicants’ identities has generally been ad hoc or in response to state requests. For example, DHS provided additional information to states in June 2012 in the form of answers to frequently asked questions posted to its website (see http://www.dhs.gov/files/programs/secure-drivers-licenses.shtm). Moreover, agency officials previously indicated that DHS was planning to issue a comprehensive guidance document specifying actions states could take to meet REAL ID Act requirements and what states should cover in the certification plans they submit to DHS for approval. However, DHS officials said they are now reevaluating that decision. Agency officials said DHS will continue to provide additional guidance as needed through the frequently asked questions web page, presentations at conferences of state licensing agency officials, or responses to specific questions from individual states. However, state officials reported mixed experiences with how DHS has responded to their specific questions. On the one hand, officials in several states said DHS has been responsive to questions they had on meeting particular REAL ID Act requirements. For example, officials in one state said DHS has for the most part responded relatively quickly to e-mail inquiries. On the other hand, however, officials in some states cited instances in which DHS has not responded promptly, or at all, to their questions. For example, officials in one state told us they had not received a response to a question they first asked DHS in 2009 about whether enhanced drivers’ licenses would meet REAL ID Act requirements.
Officials in another state told us it took longer than they expected for DHS to respond to a question about whether refugees and asylum seekers should be treated as permanent U.S. residents when they apply for a license.

DHS guidance is especially critical for two key REAL ID Act requirements—not issuing licenses to persons who already have them from other states and verifying birth certificates—given that the electronic verification systems designed for those purposes will not be fully operational for years and that the deadline for states to submit their compliance plans is approaching. DHS regulations require states to use electronic verification systems as they become available, but also authorize states to use alternative methods approved by DHS. However, in addition to not providing comprehensive guidance specifying what alternative procedures would be acceptable for compliance with these requirements, DHS officials also indicated they have no plans to promote certain strategies they consider potentially useful that might partially help states meet these requirements, such as: (1) expansion of the AAMVA photo sharing program to additional states, and (2) expansion of licensing agencies’ efforts to verify birth certificates through their own states’ birth records for applicants born in-state. Instead, DHS plans to consider alternatives states propose in their compliance plans. DHS officials said they believe this approach gives states opportunities to develop innovative solutions and flexibility to consider their own circumstances. However, officials in some states we interviewed expressed a need for direction from DHS to help identify possible alternatives. Officials in one state we interviewed, for example, wanted assistance in identifying what procedures could be followed to meet these requirements until the state-to-state verification system and EVVE are fully operational. Officials in another state told us that in their view, DHS would need to provide additional options for meeting these requirements in order for DHS to determine states are compliant by January 2013.

Since the terrorist attacks of September 11, 2001, states have largely closed off certain approaches that identity thieves and terrorists have used to fraudulently obtain drivers’ licenses, and federal actions have contributed to this progress by enhancing verification systems or by providing financial support to help states develop new systems. But, as our investigative work demonstrates, it is still possible to exploit several remaining vulnerabilities in states’ identity verification procedures to fraudulently obtain genuine drivers’ licenses, contrary to the purpose of the REAL ID Act. DHS has provided some guidance about certain aspects of REAL ID implementation, primarily in response to state questions. However, the lack of proactive guidance by DHS on interim solutions for certain REAL ID Act requirements has hampered states’ ability to fully address these gaps. For example, even though the state-to-state system is still years from fruition, there are opportunities before the system’s expected completion date of 2023 for states to at least partially address the REAL ID requirement to prevent people from getting multiple licenses from different states—and thereby close off certain paths to cross-state fraud. But without guidance and encouragement from DHS, states and other agencies may be less likely to coordinate in pursuit of these opportunities.
Similarly, even though EVVE is not yet fully operational, states can still make it harder for criminals to use forged birth certificates by, for example, checking their own birth records for license applicants born in-state. However, without leadership from DHS, states and other agencies may be less likely to coordinate in pursuit of these opportunities or see the value in taking action. Additionally, in the absence of an effective DHS strategy to help states address these REAL ID Act requirements and high-risk vulnerabilities while the national systems are being developed, states may elect not to comply with the Act, may invest in ad hoc or stopgap measures that are not sufficient for compliance, or most importantly, may be ill-equipped to adequately combat this type of fraud. To enhance state driver licensing agencies’ ability to combat driver’s license fraud, consistent with the requirements of the REAL ID Act, we recommend that the Secretary of Homeland Security take the following interim actions while national systems to detect cross-state and birth certificate fraud are being developed: 1. Work with state, federal and other partners to develop and implement an interim strategy for addressing cross-state license fraud. Such a strategy could include, for example, expansion of AAMVA’s photo sharing program or enhanced utilization of SSOLV to identify SSNs that are queried multiple times by different states. This strategy should include plans for sharing best practices and ideas for alternative solutions among the states. 2. Work with states and other partners to develop and implement an interim strategy for addressing birth certificate fraud. Such a strategy could include, for example, coordination between driver licensing agencies and state vital records agencies to verify birth certificates for license applicants born in-state. We provided a draft of this report to DHS and SSA for review and comment. In its written comments (see app. I), DHS did not concur with either of our recommendations, saying that interim strategies for addressing cross-state and birth certificate fraud are not needed. The agency said it has informed states that existing systems and procedures may address these issues and meet regulatory requirements, and has provided grant funds to help states develop their own new solutions. DHS emphasized that states need the flexibility to adopt solutions that best fit their individual circumstances. We acknowledge in our report that DHS has supported states’ efforts to address cross-state and birth certificate fraud in the ways it outlines in its comments. However, we continue to believe that DHS needs to assume a more proactive role in these areas— and that it is possible to do so without being overly prescriptive. State driver licensing agencies remain vulnerable to cross-state and birth certificate fraud. Existing systems and methods are not sufficient to address the vulnerabilities, as our undercover work demonstrates. Given the anticipated date (2023) for full implementation of the state-to-state system, and the continuing issues with driver licensing agencies’ use of the EVVE system, states need new interim solutions and alternatives now. And, officials in many of the states we contacted still said they are confused about how to comply with certain REAL ID provisions, such as those related to cross-state and birth certificate fraud, despite DHS’ efforts to provide information through conferences and responses to individual state questions. 
A formal strategy for addressing these vulnerabilities in the short term that is made available to all states in a consistent manner would better enable states to learn about and implement new options. Furthermore while DHS notes in its comments that HHS has a statutory responsibility for setting minimum standards for birth certificates, DHS involvement in this area is also critical because establishing date of birth is a central part of the driver’s license application process. DHS also provided technical comments, which we incorporated as appropriate. In its written comments (see app. II), SSA asked that we remove from our first recommendation the reference to enhanced utilization of SSOLV as one option for detecting cross-state fraud. As SSA notes, we do acknowledge in our report that there may be challenges with such use of SSOLV. Accordingly, our recommendation does not direct DHS and SSA to proceed with modifying SSOLV. It directs DHS to consider, in consultation with relevant partners, the enhanced use of SSOLV as one of a range of options for addressing cross-state fraud. We expect that DHS and SSA would more thoroughly evaluate the potential benefits and challenges of using SSOLV for this purpose and jointly determine whether to include this option in an overall strategy for combating cross-state fraud. Consequently, we made no change in response to this comment. SSA also provided technical comments which we incorporated as appropriate. We are sending copies of this report to the relevant congressional committees, the Secretary of Homeland Security, the Commissioner of Social Security, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III. In addition to the contact named above, Lori Rectanus, Assistant Director; Lorin Obler; Joel Marus; Susannah Compton; John Cooney, Jr.; Sarah Cornetto; Keira Dembowski; Holly Dye; Robert Graves; Dana Hopings; Kristy Kennedy; Otis Martin; Mimi Nguyen; George Ogilvie; Almeta Spencer; and Walter Vance made key contributions to this report. State Department: Undercover Tests Show Passport Issuance Process Remains Vulnerable to Fraud. GAO-10-922T. Washington, D.C.: July 29, 2010. Identity Theft: Governments Have Acted to Protect Personally Identifiable Information, but Vulnerabilities Remain. GAO-09-759T. Washington, D.C.: June 17, 2009. Social Security Numbers Are Widely Available in Bulk and Online Records, but Changes to Enhance Security Are Occurring. GAO-08-1009R. Washington, D.C.: September 19, 2008. Social Security Numbers: Use Is Widespread and Protection Could Be Improved. GAO-07-1023T. Washington, D.C.: June 21, 2007. Social Security Numbers: Federal Actions Could Further Decrease Availability in Public Records, though Other Vulnerabilities Remain. GAO-07-752. Washington, D.C.: June 15, 2007. Personal Information: Data Breaches Are Frequent, but Evidence of Resulting Identity Theft Is Limited; However, the Full Extent Is Unknown. GAO-07-737. Washington, D.C.: June 4, 2007. Personal Information: Key Federal Privacy Laws Do Not Require Information Resellers to Safeguard All Sensitive Data. GAO-06-674. Washington, D.C.: June 26, 2006. 
Social Security Numbers: Internet Resellers Provide Few Full SSNs, but Congress Should Consider Enacting Standards for Truncating SSNs. GAO-06-495. Washington, D.C.: May 17, 2006. Social Security Numbers: More Could Be Done to Protect SSNs. GAO-06-586T. Washington, D.C.: March 30, 2006. Identity Theft: Some Outreach Efforts to Promote Awareness of New Consumer Rights Are Under Way. GAO-05-710. Washington, D.C.: June 30, 2005. Social Security Numbers: Governments Could Do More to Reduce Display in Public Records and on Identity Cards. GAO-05-59. Washington, D.C.: November 9, 2004. Social Security Numbers: Improved SSN Verification and Exchange of States’ Driver Records Would Enhance Identity Verification. GAO-03-920. Washington, D.C.: September 15, 2003.
Obtaining a driver's license under another's identity can enable criminals to commit various crimes. The 9/11 terrorists, for example, possessed fraudulent licenses. The REAL ID Act sets minimum standards for states when verifying license applicants' identity, which go into effect in January 2013. If states do not meet these requirements, their licenses will not be accepted for official purposes such as boarding commercial aircraft. DHS is responsible for establishing how states may certify compliance and for determining compliance. SSA helps states verify SSNs. GAO was asked to examine (1) states' identity verification procedures for license applicants, (2) the procedures' effectiveness in addressing fraud, and (3) how federal agencies have helped states enhance procedures. GAO analyzed DHS and SSA data on states' use of verification systems; interviewed officials from DHS, SSA, and other organizations; and conducted on-site or phone interviews with licensing agency officials in 11 states. GAO tested state procedures in three states that have known vulnerabilities; results from these states are not generalizable. To verify license applicants' identity, all 50 states and the District of Columbia have procedures that may detect counterfeit documents. For example, all states are now verifying key personal information, such as Social Security numbers (SSN) through online queries to a Social Security Administration (SSA) database, a significant increase from about a decade ago. This effort helps ensure that the identity information presented belongs to a valid identity and also is not associated with a deceased person. Additionally, most states verify non-citizen applicants' immigration documents with the Department of Homeland Security (DHS) to ensure these individuals have lawful status in the United States. Many states are also using facial recognition techniques to better detect attempts to obtain a license under another's identity. While most states have taken steps required by the REAL ID Act of 2005 (Act), officials in some states indicated that they may not comply with certain provisions--such as re-verifying SSNs for license renewals--because of state laws or concerns that these requirements are unnecessary and burdensome. State officials interviewed by GAO report that identity verification procedures have been effective at combating certain kinds of fraud, but vulnerabilities remain. Officials in most of the 11 states GAO contacted reported a decline in the use of counterfeit identity documents, and officials in states using facial recognition said they detected a number of identity theft attempts. However, criminals can still steal the identity of someone in one state and use it to get a license in another because states lack the capacity to consistently detect such cross-state fraud. A system for addressing such fraud would enable states to comply with the Act's prohibition against issuing licenses to individuals who already have a license from another state, but may not be fully operational until 2023. Furthermore, officials in many states said they have difficulties detecting forged birth certificates. Verifying date of birth is also required by the Act, and a system exists for doing so, but no licensing agencies are using it because of concerns about incomplete data, among other reasons. 
Partly because these two systems are not fully operational, GAO investigators were able to use counterfeit out-of-state drivers' licenses and birth certificates to fraudulently obtain licenses in three states. By improving their respective verification systems, SSA and DHS have helped states enhance their identity verification procedures. For example, SSA has established timeliness goals for responding to state SSN queries and DHS has addressed data accuracy issues. DHS has also provided funding for states to develop new systems. However, DHS has not always provided timely, comprehensive, or proactive guidance to help states implement provisions of the Act related to identity verification. For example, DHS did not issue formal, written guidance in this area for more than 4 years after issuing final regulations, even though officials from most states GAO interviewed said they needed such guidance. Additionally, even though relevant national systems are not yet fully operational, DHS has no plans to promote certain alternatives states can use to comply with the Act's identity verification requirements and combat cross-state and birth certificate fraud. Officials in some states indicated they needed direction from DHS in this area. GAO recommends that DHS work with partners to take interim actions to help states address cross-state and birth certificate fraud. DHS did not concur with these recommendations, saying its ongoing efforts are sufficient. GAO has demonstrated that vulnerabilities remain as long as national systems are not yet fully operational. Therefore, GAO continues to believe additional DHS actions are needed.
Oversight of nursing homes is a shared federal-state responsibility. As part of this responsibility, CMS (1) sets federal quality standards, (2) establishes state responsibilities for ensuring federal quality standards are met, (3) issues guidance on determining compliance with these standards, and (4) performs oversight of state survey activities. It communicates these federal standards and state responsibilities in the State Operations Manual (SOM) and through special communications such as program memorandums and survey and certification letters. CMS provides less guidance on how states should manage the administration of their survey programs. CMS uses staff in its 10 regional offices to oversee states’ performance on surveys that ensure that facilities participating in Medicare and Medicaid provide high-quality care in a safe environment. Yet, the persistent understatement of serious nursing home deficiencies that we have reported and survey quality weaknesses that we and the HHS Office of Inspector General identified serve as indicators of weaknesses in the federal, state, or shared components of oversight. Every nursing home receiving Medicare or Medicaid payment must undergo a standard state survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. During a standard survey, teams of state surveyors—generally consisting of registered nurses, social workers, dieticians, or other specialists—evaluate compliance with federal quality standards. The survey team determines whether the care and services provided meet the assessed needs of the residents and measure resident outcomes, such as the incidence of preventable pressure sores, weight loss, and accidents. In contrast to a standard survey, a complaint investigation generally focuses on a specific allegation regarding a resident’s care or safety and provides an opportunity for state surveyors to intervene promptly if problems arise between standard surveys. Surveyors assess facilities using federal nursing home quality standards that focus on the delivery of care, resident outcomes, and facility conditions. These standards total approximately 200 and are grouped into 15 categories, such as Quality of Life, Resident Assessment, Quality of Care, and Administration. For example, there are 23 standards (known as F-tags) within the Quality of Care category ranging from the prevention of pressure sore development (F-314) to keeping the resident environment as free of accident hazards (F-323) as is possible. Surveyors categorize deficient practices identified on standard surveys and complaint investigations—facilities’ failures to meet federal standards—according to scope (i.e., the number of residents potentially or actually affected) and severity (i.e., the degree of relative harm involved)—using a scope and severity grid (see table 1). Homes with deficiencies at the A through C levels are considered to be in substantial compliance, while those with deficiencies at the D through L levels are considered out of compliance. Throughout this report, we refer to deficiencies at the actual harm and immediate jeopardy levels—G through L—as serious deficiencies. CMS guidance requires state survey teams to revisit a home to verify that serious deficiencies have actually been corrected. In addition, when serious deficiencies are identified, sanctions can be imposed to encourage facilities to correct the deficiencies and enforce federal quality standards. 
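Because the scope and severity grid (table 1) is not reproduced here, the short sketch below summarizes the categorization logic described above and the compliance consequences that drive the sanctions discussed next. The letter layout assumes the standard CMS grid, which is consistent with the ranges cited in the text: A through C indicate substantial compliance, D through L indicate noncompliance, and G through L are the serious levels.

```python
# Letter grid for scope and severity; the layout is assumed to follow the standard CMS grid,
# consistent with the ranges described in the text (A-C substantial compliance,
# D-L noncompliance, G-L serious).
SEVERITY_ROWS = {
    "potential for minimal harm":           ("A", "B", "C"),
    "potential for more than minimal harm": ("D", "E", "F"),
    "actual harm":                          ("G", "H", "I"),
    "immediate jeopardy":                   ("J", "K", "L"),
}
SCOPE_COLUMNS = {"isolated": 0, "pattern": 1, "widespread": 2}

def grade_deficiency(severity: str, scope: str) -> dict:
    """Map a deficiency's severity and scope to its letter and its compliance consequences."""
    letter = SEVERITY_ROWS[severity][SCOPE_COLUMNS[scope]]
    return {
        "letter": letter,
        "substantial_compliance": letter in "ABC",
        "out_of_compliance": letter in "DEFGHIJKL",
        "serious": letter in "GHIJKL",   # actual harm or immediate jeopardy
    }

# An isolated deficiency causing actual harm is a G-level citation: out of compliance and serious.
print(grade_deficiency("actual harm", "isolated"))
```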
Sanctions include fines known as civil money penalties, denial of payment for new Medicare or Medicaid admissions, or termination from the Medicare and Medicaid programs. For example, facilities that receive at least one G through L level deficiency on successive standard surveys or complaint investigations must be referred for immediate sanctions. Facilities may appeal cited deficiencies and if the appeal is successful, the severity of the sanction could be reduced or the sanction could be rescinded. Facilities have several avenues of appeal, including informal dispute resolution (IDR) at the state survey agency level. The IDR gives providers one opportunity to informally refute cited deficiencies after any survey. While CMS requires that states have an IDR policy in place, it does not specify how IDR processes should be structured. To conduct nursing home surveys, CMS has traditionally used a methodology that requires surveyors to select a sample of residents and (1) review data derived from the residents’ assessments and medical records; (2) interview nursing home staff, residents, and family members; and (3) observe care provided to residents during the course of the survey. When conducting a survey, surveyors have discretion in: selecting a sample of residents to evaluate; allocating survey time and emphasis within a framework prescribed by CMS; investigating potentially deficient practices observed during the survey; and determining what evidence is needed to identify a deficient practice. CMS has developed detailed investigative protocols to assist state survey agencies in determining whether nursing homes are in compliance with federal quality standards. These protocols are intended to ensure the thoroughness and consistency of state surveys and complaint investigations. In 1998, CMS awarded a contract to revise the survey methodology. The new Quality Indicator Survey (QIS) was developed to improve the consistency and efficiency of state surveys and provide a more reliable assessment of quality. The QIS uses an expanded sample of residents and structured interviews with residents and family members in a two-stage process. Surveyors are guided through the QIS process using customized software on tablet personal computers. In stage 1, a large resident sample is drawn and relevant data from on- and off-site sources is analyzed to develop a set of quality-of-care indicators, which will be compared to national benchmarks. Stage 2 systematically investigates potential quality-of-care concerns identified in stage 1. Because of delays in implementing the QIS, we recommended in 2003 that CMS finalize the development, testing, and implementation of a more rigorous survey methodology, including investigative protocols that provide guidance to surveyors in documenting deficiencies at the appropriate scope and severity level. CMS concluded a five-state demonstration process of the QIS in 2007 and is currently expanding the implementation of the QIS. As of 2008, only Connecticut had implemented the QIS statewide, and CMS projected that the QIS would not be fully implemented in every state until 2014. States are largely responsible for the administration of the survey program. State survey agencies administer and have discretion over many survey activities and policies, including hiring and retaining a surveyor workforce, training surveyors, conducting supervisory reviews of surveys, and other activities. 
Hiring and Retaining a Surveyor Workforce: State survey agencies hire the staff to conduct surveys of nursing homes and determine the salaries of these personnel according to the workforce practices and restrictions of the state. Salaries, particularly surveyor salaries, are the most significant cost component of state survey activities, which are supported through a combination of Medicare, Medicaid, and non-Medicaid state funds. CMS has some requirements for the make-up of nursing home survey teams, including the involvement of at least one registered nurse (RN) in each nursing home survey. In February 2009, we reported that officials from the Association of Health Facility Survey Agencies (AHFSA) and other state officials told us they have had difficulty recruiting and retaining the survey workforce for several years. In our report, we recommended that CMS undertake a broad-based reexamination to ensure, among other aspects, an adequate survey workforce with sufficient compensation to attract and retain qualified staff. Training: States are responsible for training new surveyors through participating in actual surveys under direct supervision. Within their first year of employment, surveyors must complete two CMS online training courses—the Basic Health Facility Surveyor Course and Principles of Documentation—and a week-long CMS-led Basic Long-Term Care Health Facility Surveyor Training Course; at the conclusion of the course surveyors must pass the Surveyor Minimum Qualifications Test (SMQT) to survey independently. In addition, state survey agencies are required to have their own programs for staff development that respond to the need for continuing development and education of both new and experienced employees. Such staff development programs must include training for surveyors on all regulatory requirements and the skills necessary to conduct surveys. To assist in continuing education, CMS develops a limited number of courses for ongoing training and provides other training materials. Supervisory Reviews: States may design a supervisory review process for deficiencies cited during surveys, although CMS does not require them to do so. In July 2003, we recommended that CMS require states to have a minimum quality-assurance process that includes a review of a sample of survey reports below the level of actual harm to assess the appropriateness of scope and severity levels cited and help reduce instances of understated quality-of-care problems. CMS did not implement this recommendation. State Agency Practices and Policies: State survey agencies’ practices, including those on citing deficiencies and addressing pressure from the industry or others, are largely left to the discretion of state survey agencies. In the past, we reported that in one state, CMS officials had found surveyors were not citing all deficiencies. If a state agency fails to cite all deficiencies associated with noncompliance, nursing home deficiencies are understated on the survey record. CMS can identify or monitor states for systematic noncitation practices through reviews of citation patterns, informal feedback from state surveyors, state performance reviews, and federal monitoring surveys (discussed below). CMS also gives states latitude in defining their IDR process. Federal law requires federal surveyors to conduct federal monitoring surveys in at least 5 percent of state-surveyed Medicare and Medicaid nursing homes in each state each year. 
CMS indicates it meets the statutory requirement by conducting a mix of on-site reviews: comparative and observational surveys.

Comparative surveys. A federal survey team conducts an independent survey of a home recently surveyed by a state survey agency in order to compare and contrast its findings with those of the state survey team. This comparison takes place after completion of the federal survey. When federal surveyors identify a deficiency not cited by state surveyors, they assess whether the deficiency existed at the time of the state survey and should have been cited. This assessment is critical in determining whether understatement occurred, because some deficiencies cited by federal surveyors may not have existed at the time of the state survey. Our May 2008 report stated that comparative surveys found problems at the most serious levels of noncompliance—the actual harm and immediate jeopardy levels (G through L). About 15 percent of federal comparative surveys nationwide identified at least one deficiency at the G through L level that state surveyors failed to cite. While this proportion is small, CMS maintains that any missed serious deficiencies are unacceptable. Further, state surveys with understated deficiencies may allow the surveyed facilities to escape sanctions intended to discourage repeated noncompliance. In our May 2008 report we found that for nine states federal surveyors identified missed serious deficiencies in 25 percent or more of comparative surveys for fiscal years 2002 through 2007; we defined these states as high-understatement states (see fig. 1). Zero-understatement states were states that had no federal comparative surveys identifying missed deficiencies at the actual harm or immediate jeopardy levels; and low-understatement states were the 10 states with the lowest percentage of missed serious deficiencies (less than 6 percent), including all 7 zero-understatement states. Our May 2008 report also found that missed deficiencies at the potential for more than minimal harm level (D through F) were considerably more widespread than those at the G through L level on comparative surveys, with approximately 70 percent of comparative surveys nationwide identifying at least one missed deficiency at this level. Undetected care problems at this level are of concern because they could become more serious over time if nursing homes are not required to take corrective actions.

Observational surveys. Federal surveyors accompany a state survey team to evaluate the team’s performance and ability to document survey deficiencies. State teams are evaluated in six areas, including two—General Investigation and Deficiency Determination—that affect the appropriate identification and citation of deficiencies. The General Investigation segment assesses the effectiveness of state survey team actions such as collection of information, discussion of survey observations, interviews with nursing home residents, and implementation of CMS investigative protocols. The Deficiency Determination segment evaluates the skill with which the state survey teams (1) analyze and integrate all information collected, (2) use the guidance for surveyors, and (3) assess compliance with regulatory requirements.
Federal observational surveys are not independent evaluations of the state survey because state surveyors may perform their survey tasks more attentively than they would if federal surveyors were not present; however, they provide more immediate feedback to state surveyors and may help identify state surveyor training needs. We previously reported that state survey teams’ poor performance on federal observational surveys in the areas of General Investigation and Deficiency Determination may contribute to the understatement of deficiencies. Further, poor state performance in these two areas supported the finding of understatement as identified through the federal comparative surveys. We found that about 8 percent of state survey teams observed by federal surveyors nationwide received below-satisfactory ratings on General Investigation and Deficiency Determination from fiscal years 2002 through 2007. However, surveyors in high-understatement states performed worse in these two areas of the federal observational surveys than surveyors in the low-understatement states. For example, an average of 12 and 17 percent of state survey teams observed by federal surveyors in high-understatement states received below-satisfactory ratings for these two areas, respectively. In contrast, an average of 4 percent of survey teams in low-understatement states received the same below-satisfactory scores for both deficiency determination and investigative skills. Nationwide, one-third of nursing homes had a greater average number of serious deficiencies on federal observational surveys than on state standard surveys during fiscal years 2002 through 2007, but in eight states, it was more than half of homes. For this one-third of homes nationwide, state standard surveys cited 83 percent fewer serious deficiencies than federal surveys during this same time period.

Over a third of both surveyors and state agency directors responding to our questionnaire identified weaknesses in the federal government’s nursing home survey process that contributed to the understatement of deficiencies. The weaknesses included problems with the current survey methodology; written guidance that is too long or complex; and to a lesser extent, survey predictability or other advance notice of inspections, which may allow nursing homes to conceal deficiencies. At the time our questionnaires were fielded, eight states had started implementing CMS’s new survey methodology. The limited experience among these states suggests that the new methodology may improve consistency of surveys, but information is limited, and the long-term ability of the new methodology to reduce understatement is not yet known. Both surveyors and state agency directors reported weaknesses in the survey process, and, on our questionnaire, linked these weaknesses to understatement of deficiencies. Nationally, 46 percent of nursing home surveyors responded that weaknesses in the current survey methodology resulted in missed or incorrectly identified deficiencies, with this number ranging by state from 0 to 74 percent (see table 2). Thirty-six percent of state agency directors responded that weaknesses in the current survey methodology at least sometimes contributed to understatement of deficiencies in their states. One such weakness identified by both surveyors and directors was the number of survey tasks that need to be completed.
According to surveyors and agency directors responding to our questionnaire, another weakness with the federal survey process involved CMS’s written guidance to help state agencies follow federal regulations for surveying long-term care facilities. Both surveyors and state agency directors mentioned concerns about the length, complexity, and subjectivity of the written guidance. One state agency director we interviewed told us that the size of the SOM made it difficult for surveyors to carry the guidance and consult it during surveys. Although the SOM is available in an electronic format, surveyors in this state did not use laptops. In addition, a small percentage of surveyors commented on our questionnaire that CMS guidance was inconsistently applied in the field. A common complaint from these surveyors was that different supervisors required different levels of evidence in order to cite a deficiency at the actual harm or immediate jeopardy level. Forty percent of surveyors and 58 percent of state agency directors reported that additional training on how to apply CMS guidance was needed. A specific concern raised about the current survey guidance was determining the severity level for an observed deficiency. Forty-four percent of state agency directors reported on our questionnaire that confusion about CMS’s definition of the actual–harm level severity requirements at least sometimes contributed to understatement in their states. CMS’s guidance for determining actual harm states, “this does not include a deficient practice that only could or has caused limited consequence to the resident.” State agency directors from several states found this language confusing, including one director who said it is unclear whether conditions like dehydration that are reversed in the hospital should be cited as actual harm. As we reported in 2003, CMS officials acknowledged that the language linking actual harm to practices that have “limited consequences” for a resident has created confusion; however, the agency has not changed or revised this language. State agency directors and surveyors indicated that CMS’s written guidance for certain federal nursing home quality standards could be improved and that revised investigative protocols were helpful. Specifically, 11 state agency directors reported that CMS guidance on quality standards related to abuse could be improved. State agency directors commented that the guidance for certain quality standards was too long, with the guidance for two standards being over 50 pages long. One state agency director also noted that overly complex guidance will lead to an unmanageable survey process. Surveyors’ concerns about the sufficiency of CMS’s guidance varied for different quality standards (see table 3). For instance, 21 percent of surveyors nationwide reported that CMS guidance on pain management was not sufficient to identify deficiencies, whereas only 5 percent reported that guidance on pressure ulcers was not sufficient. Our analysis found that fewer surveyors had concerns with the guidance on quality standards revised through CMS’s guidance update initiative. For example, the guidance on pressure ulcers was revised in 2004 and the guidance on accidents was revised in 2007; these topics ranked last among the areas of concern. Furthermore, state agency directors from several states commented on the usefulness of CMS’s revised investigative protocols for federal quality standards. 
Another weakness associated with the federal survey process was the potential for surveys to be predictable based solely on their timing. Eighteen percent of state agency directors reported that survey predictability or other advance notice of inspections at least sometimes contributed to understatement in their states. We analyzed state agencies’ most-recent nursing home surveys and found that 29 percent of these surveys could be considered predictable due to their timing. We previously reported that survey predictability could contribute to understatement because it gives nursing homes the opportunity to conceal deficiencies if they choose to do so. CMS officials previously stated that reducing survey predictability could require increased funding because more surveys would need to be conducted within 9 months of the previous survey. However, CMS noted that state agencies are not funded to conduct any surveys within 9 months of the last standard survey. There was no consensus among the eight state agency directors who had started implementing the QIS as of November 2008 about how the new survey methodology would affect understatement. Three directors reported that the QIS was likely to reduce understatement; three directors reported that it was not likely to reduce understatement; and two directors were unsure or had no opinion (see fig. 2). However, all eight directors reported that the new QIS methodology was likely to improve survey consistency both within and across states. In addition, five of these directors reported that the new QIS methodology was likely to improve survey quality. Five of the eight directors also indicated that the QIS required more time than the traditional survey methodology. CMS funded an independent evaluation of the QIS, which was completed by a contractor in December 2007. The evaluation assessed the effectiveness of the new methodology by studying (1) its effect on accuracy of surveys, (2) documentation of deficiencies, (3) time required to complete survey activities, (4) number of deficiencies cited, and (5) surveyor efficiency. The evaluation did not draw a firm conclusion about the overall effectiveness of the QIS as measured through these five areas. For instance, the QIS methodology was associated with an increase in the total number of deficiencies cited, including an increase in the number of G-level deficiencies and the number of quality standard areas cited. However, the evaluation did not find that the QIS methodology increased survey accuracy, noting that QIS and traditional survey samples were comparable in overall quality and in the frequency of standards cited for deficiencies with either a pattern or widespread scope. The results suggested that more deficiencies with higher scope could have been cited for both the QIS and traditional surveys. Similarly, there was no evidence that the QIS resulted in higher-quality documentation or improved surveyor efficiency. Although five state agency directors reported that the QIS required more time to complete than the traditional methodology, the evaluation found some evidence of a learning curve, suggesting that surveyors were able to complete surveys faster as they became familiar with the new process. 
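The evaluation findings above, and the recommendations discussed next, turn largely on how well stage 1 of the QIS flags the right care areas for stage 2 investigation. The sketch below illustrates that two-stage flow in simplified form; the indicator names, benchmark values, and flagging rule are illustrative assumptions, not the QIS's actual thresholds or software.

```python
from typing import Dict, List

def stage_one_flags(resident_sample: List[dict],
                    national_benchmarks: Dict[str, float]) -> List[str]:
    """Stage 1: compute each quality-of-care indicator's rate in the resident sample
    and flag indicators that exceed the national benchmark."""
    flagged = []
    n = len(resident_sample)
    for indicator, threshold in national_benchmarks.items():
        rate = sum(1 for resident in resident_sample if resident.get(indicator)) / n
        if rate > threshold:
            flagged.append(indicator)
    return flagged

def stage_two_worklist(flagged: List[str]) -> List[str]:
    """Stage 2: systematically investigate each care area flagged in stage 1."""
    return [f"Investigate potential concern: {area}" for area in flagged]

# Illustrative run: the pressure ulcer rate exceeds its (hypothetical) benchmark and is
# carried into stage 2; the weight loss rate does not.
sample = [{"pressure_ulcer": True}, {"pressure_ulcer": True, "weight_loss": True},
          {"pressure_ulcer": False}, {}]
benchmarks = {"pressure_ulcer": 0.35, "weight_loss": 0.30}
print(stage_two_worklist(stage_one_flags(sample, benchmarks)))
# ['Investigate potential concern: pressure_ulcer']
```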
The evaluation generated a number of recommendations for improving the QIS that are consistent with reducing understatement, such as improving the specificity and usability of investigative protocols and evaluating how well the new methodology accurately identifies the areas in which there are potential quality problems. Since the evaluation did not find improved accuracy, CMS concluded that non-QIS factors, including survey guidance clarification and surveyor training and supervision, would help improve survey accuracy. Additionally, CMS concluded that future QIS development efforts should concentrate on improving survey consistency and giving supervisors more tools to assess the performance of surveyor teams. Ten state agency directors who had not yet started implementing the QIS responded to our questionnaire with concerns about the cost associated with implementing the new methodology, including the resources required to train staff and obtain new equipment. Of these 10 directors, 3 also expressed concerns that allotting staff time for QIS implementation would prevent the agency from completing mandatory survey activities.

Workforce shortages and training inadequacies affected states’ ability to complete thorough surveys, contributing to understatement of nursing home deficiencies. Responses to our questionnaires indicated that states experienced workforce shortages or were attempting to accomplish their workload with a high percentage of inexperienced surveyors. In states with fewer staff to do the work, time frames were compressed. The increased workload burden may have had an effect on the thoroughness of surveys in those states and surveyors’ ability to attend training. The frequent hiring of new surveyors to address workforce shortages also burdened states’ surveyor training programs. Surveyors, state agency directors, and state performance on federal observational surveys indicated that inadequacies in initial and ongoing training may have compromised survey accuracy in high-understatement states. Although a small percentage of state agency directors reported that workforce shortages always or frequently contributed to the understatement of nursing home deficiencies in their states, 36 percent indicated that workforce shortages sometimes contributed to understatement (see table 4). In many states, workforce shortages resulted in a greater reliance on inexperienced surveyors. According to state agency directors and surveyors, this collateral effect—inexperienced surveyors—also may have contributed to understatement. States also expressed concern about completing their workload, which appeared to be, in part, an outgrowth of workforce shortages and use of inexperienced surveyors.

Workforce Shortages. Since 2003, we have reported that states have experienced pervasive workforce shortages, and responses to our questionnaires indicate that shortages continue to affect states. Seventy-two percent of state agency directors reported that they always or frequently had a surveyor workforce shortage, and another 16 percent said it occurred sometimes. The average vacancy rate for surveyors was 14 percent, and one-fourth of states had a vacancy rate higher than 19 percent (see table 5). Among the 49 reporting states, the vacancy rate ranged from a maximum of 72 percent in Alabama to 0 percent in Nevada, Rhode Island, Vermont, and Utah.
The workforce shortages have stemmed mostly from the preference to employ RNs as surveyors in state survey agencies, with half of reporting states employing RNs as more than 75 percent of their surveyor workforce. In the past, states have claimed that they had difficulty matching RN salaries offered by the private sector, and this hampered the hiring and retention of RNs. The Virginia state agency director commented during an interview that the nursing home industry values individuals who have passed CMS's SMQT and hires the agency's surveyors away after they are trained and certified by CMS. The Virginia director and others also identified the stress of the job—regular travel, time pressures to complete the workload, and the regulatory environment—as a challenge to retaining staff. Previously, we reported that workforce instability arising from noncompetitive RN surveyor salaries and hiring freezes affected states' abilities to complete their survey workload or resulted in the hiring of less-qualified staff. Most recently, the poor economy has further constrained state budgets for surveyors. For example, to address its budget shortfall in 2009, California will furlough its state employees, including surveyors, for 2 days every month from February 2009 through June 2010. An additional 11 states also reported furloughs for 2009, and 13 are considering furloughs, salary reductions, or layoffs or will employ such measures in the future. Inexperienced Surveyors. Many states are attempting to accomplish their workload with a larger share of inexperienced surveyors, and state agency directors sometimes linked this reliance on inexperienced staff to the understatement of nursing home deficiencies. On average, 30 percent of surveyors had less than 2 years' experience (see table 5); however, the percentage of inexperienced surveyors ranged from 10 to 82 percent across states that reported this information. Among state agency directors, 16 percent indicated that inexperienced surveyors always or frequently contributed to understatement, while another 48 percent indicated that surveyor inexperience sometimes contributed to understatement in their states. In response to our questionnaires, 26 percent of surveyors indicated that survey teams always or frequently had too many inexperienced surveyors, and another 33 percent indicated that survey teams sometimes had too many inexperienced surveyors (see table 6). Half or more of all surveyors in six states—Alabama, Alaska, Arizona, Idaho, New Mexico, and Utah—reported that there were always or frequently too many new surveyors who were not yet comfortable with their job responsibilities. For example, 79 percent of surveyors in Arizona reported that too many new surveyors were not comfortable with their job responsibilities, and the state agency director was among the 34 percent who reported that survey teams sometimes had an insufficient number of experienced surveyors. Overall, 26 percent of state agency directors indicated that the skill level of surveyors has decreased in the last 5 years. In interviews, six state agency directors commented that inexperienced surveyors possessed different skills or needed more time than experienced surveyors to complete surveys and that workforce shortages resulted in constant recruiting, over-burdened experienced surveyors, or the need for additional supervision and training resources. Four states—Kentucky, Nevada, New Mexico, and Virginia—reported not having enough dedicated training staff to handle the initial training for new surveyors. Workload.
States' inability to complete their workload was, in part, an outgrowth of the workforce shortages and reliance on inexperienced surveyors. More than two-thirds of state agency directors reported on our questionnaire that staffing posed a problem for completing complaint surveys, and more than half reported that staffing posed a problem for completing standard or revisit surveys. In addition, 46 percent of state agency directors reported that time pressures always, frequently, or sometimes contributed to understatement in their states. In response to our questionnaire, 16 percent of surveyors nationwide reported that workload burden influenced the citation of deficiencies—including 14 states in which 20 percent or more of surveyors reported the same. More than 50 percent of surveyors identified insufficient team size or time pressures as having an effect on the thoroughness of surveys. Surveyors' comments reiterated these concerns—over 15 percent of surveyors who wrote comments complained about the amount of time allotted to complete surveys or survey paperwork, and 11 percent indicated that staffing was insufficient to complete surveys. One state agency director suggested to us that CMS establish a national team of surveyors to augment state survey teams when states fell behind on their workload or had staffing shortages. He thought the availability of national surveyors could assist states experiencing workforce shortages and help ensure state workloads were completed. This state had experience with a similar arrangement when it hired a national contractor to complete its surveys of Intermediate Care Facilities for the Mentally Retarded. Surveyors, state agency directors, and state performance on federal observational surveys indicated that inadequacies in initial or ongoing training may compromise the accuracy of nursing home surveys and lead to the understatement of deficiencies. In addition, workload affected surveyors' ability to attend training. Initial Surveyor Training. As noted earlier, even though CMS has established specific training requirements, including coursework and the SMQT certification test, states are responsible for preparing their new surveyors for the SMQT. According to CMS, 94 percent of new surveyors nationally passed the SMQT in 2008 and, on average, surveyors answered about 77 percent of the questions correctly. These results seem to support the state agency directors' assertions that initial training was insufficient and suggest that the bar for passing the test may be set too low. Even though we cannot be certain whether the inadequacies are with the federal or state components of the training, reported differences among states in satisfaction with the initial surveyor training also could reflect gaps in state training programs. About 29 percent of surveyors in high-understatement states reported that initial training was not sufficient to cite appropriate scope and severity levels, compared with 16 percent of surveyors in low-understatement states (see table 7). Similarly, 28 percent of surveyors in high-understatement states, compared with 20 percent of those in low-understatement states, indicated that initial training was not sufficient to identify deficiencies for nursing homes. Further, 18 percent of state agency directors linked the occurrence of understatement always, frequently, or sometimes with insufficient initial training.
From 16 to 20 percent of state agency directors indicated that initial training was insufficient to (1) enable surveyors to identify deficiencies and (2) assign the appropriate level of scope and severity. Ongoing Training. Ongoing training programs are the purview of state agencies; therefore, differences between states about the sufficiency of this training also may point to gaps in the state training programs. On our questionnaire, about 34 percent of surveyors in high-understatement states indicated a need for additional training on (1) identifying appropriate scope and severity levels and (2) documenting deficiencies. This was significantly more than surveyors from low-understatement states, who indicated less need for additional training in these areas—16 and 27 percent, respectively. Among state agency directors, 10 percent attributed understatement always or frequently to insufficient ongoing training, while 14 percent indicated that insufficient ongoing training sometimes gave rise to understatement. Although 74 percent of state agency directors indicated that the state had ongoing annual training requirements, the required number of hours and the type of training varied widely by state in 2007. Among the 33 states that reported the required amount of annual state training, requirements ranged from 0 to 120 hours per year. Meanwhile, 37 states reported one or more types of required training: 32 states required surveyors to attend periodic training, 22 required on-the-job training, 10 required online computerized training, and 13 states required some other type of training. State agency directors indicated that they relied on CMS materials for ongoing training of experienced surveyors, yet many reported additional training needs and suggested that use of electronic media could make continuing education and new guidance more accessible. While 98 percent of states indicated that the CMS written guidance materials and resources were useful, over 50 percent of all state agency directors identified additional training needs in documenting deficiencies, citing deficiencies at the appropriate scope and severity level, and applying CMS guidance. On federal observational surveys, an average of 17 and 12 percent of survey teams in high-understatement states received below-satisfactory ratings for Deficiency Determination and General Investigation, respectively—two skills critical for preventing understatement. In contrast, an average of 4 percent of survey teams in low-understatement states received the same below-satisfactory scores for both deficiency determination and investigative skills. Furthermore, of the 476 surveyors who commented about training needs, one-quarter indicated a need for training support from either CMS or state agencies, and between 7 and 12 percent of those who commented identified topics such as documenting deficiencies, identifying scope and severity, CMS guidance, and medical knowledge. Inability to Attend Training. States' workload requirements and workforce shortages affected surveyors' ability to attend initial and ongoing training. Seven of the eight state agency directors we interviewed linked workforce shortages and resource constraints to their state's ability to complete the survey workload or to allow staff to participate in training courses. One director stated that workload demands compromised comprehensive training for new staff, and another reported difficulty placing new staff in CMS's initial training programs.
A third state agency director stated that, due to workload demands, she could not allow experienced staff time away from surveying to attend training courses, even when staff paid their own way. Five of the seven state agency directors suggested that it would be more efficient for training activities to be conducted more locally, such as in their states, or to be available through online, video, or other electronic media, and several emphasized the need to reduce or eliminate travel for training. Although four states also expressed a preference for interactive training opportunities, one state believed that technological solutions could allow for more accessible training that was also interactive. State supervisory reviews, which generally occurred more frequently on higher-level deficiencies, often are not designed to identify understated deficiencies. State agencies generally conducted more supervisory reviews on surveys with higher-level deficiencies, compared to surveys with deficiencies at the potential for more than minimal harm level (D through F)—the deficiencies most likely to be understated. While a focus on higher-level deficiencies enables states to be certain that such deficiencies are well documented, not reviewing surveys with deficiencies at lower levels represents a missed opportunity to ensure that all serious deficiencies are cited. State surveyors who reported having frequent changes made to their survey reports during supervisory reviews also more often reported that they were burdened by other factors contributing to understatement, such as workforce shortages and survey methodology weaknesses. According to state agency directors' responses to our questionnaire, states generally focused supervisory review on surveys with higher-level deficiencies, rather than on surveys with deficiencies at the potential for more than minimal harm level (D through F)—the deficiencies most likely to be understated. During supervisory reviews, either direct-line supervisors or central state agency staff may review draft survey records. On average, surveys at the D through F level underwent about two steps of review, while surveys with deficiencies at the immediate jeopardy level (J through L) went through three steps. For example, Washington reviews its surveys using either a two-step review that includes survey team and field manager reviews or a three-step process that includes both these reviews and an additional review by central state agency staff for serious deficiencies. As a result, central state agency staff in Washington do not review deficiencies below the level of actual harm. In addition, we found that five states—Alaska, Hawaii, Illinois, Nebraska, and Nevada—did not review all surveys with deficiencies at the D through F levels. In fact, Hawaii did not report supervisory review of deficiencies at any level (see fig. 3). It is difficult to know whether additional supervisory reviews—the second, third, or fourth review—help make survey records more accurate and less likely to be understated, or whether these reviews result in more frequent changes to deficiency citations. However, if deficiency citations with the potential for more than minimal harm (D through F) are not reviewed, states miss the opportunity to assess whether these deficiencies warrant a higher-level citation, for example, the level of actual harm or immediate jeopardy.
Because a majority of states are organized into geographically based district or regional offices, review by central state agency staff, particularly quality assurance staff, is critical to help ensure consistency and detect understatement. However, 26 states reported that no central state agency staff reviews were conducted for surveys with deficiencies at the potential for more than minimal harm (D through F). These results are consistent with a finding from our 2003 report—that half of the 16 states we contacted for that report did not have a quality assurance process to help ensure that the scope and severity of less serious deficiencies were not understated. According to most of the eight state officials we interviewed, supervisory reviews commonly focused on documentation principles or evidentiary support, not on reducing understatement. For example, all eight states used supervisory reviews to assess the accuracy and strength of the evidence surveyors used to support deficiency citations, and three of these states reported that they emphasized reviewing survey records for documentation principles. Furthermore, seven out of eight states indicated that surveys with serious deficiencies—those that may be subject to enforcement proceedings—went through additional steps of review compared with surveys citing deficiencies with the potential for more than minimal harm (D through F). Surveyor reports of changes to deficiency citations during supervisory reviews may be related to other factors the state is experiencing that also contribute to understatement, such as workforce shortages and survey methodology weaknesses. Changes to Deficiencies. Fifty-four percent of surveyors nationwide reported on our questionnaire that supervisors at least sometimes removed the deficiency that was cited, and 53 percent of surveyors noted that supervisors at least sometimes changed the scope and severity level of cited deficiencies. Of the surveyors who reported that supervisors removed deficiencies, 13 percent reported that supervisors always or frequently did so—including 12 states with 20 percent or more of their surveyors reporting that deficiencies were removed. On the basis of surveyor reports of changes in deficiency citations alone, it is difficult to know whether the original deficiency citation or the supervisor's revised citation was a more accurate reflection of a nursing home's quality of care. Additionally, there are many reasons that survey records might be changed during supervisory review. When a surveyor fails to provide sufficient evidence for deficient practices, it may be difficult to tell whether the deficiency was not appropriately cited or whether the surveyor did not collect all the available evidence. Kentucky's state agency director offered one possible explanation—that changes to surveys often reflected a need for more support for the deficiencies cited, such as additional evidence from observations. Nevada's state agency director stated that changes to survey records often occurred when it was too late to gather more evidence in support of deficiencies. Surveyors who reported that supervisors frequently changed deficiencies also more often reported experiencing other factors that contribute to understatement. We found associations between surveyor reports of changes to deficiencies and workforce shortages and survey methodology weaknesses. Workforce shortages.
Surveyors reporting workforce shortages, including survey teams with too many new surveyors and survey teams that were either too small or given insufficient time to conduct thorough surveys, more often also reported that supervisors frequently removed deficiencies or changed the scope and severity of deficiency citations during supervisory reviews. Survey methodology weaknesses. Surveyors reporting weaknesses in the current survey methodology more often also reported that supervisors frequently removed deficiencies or changed the scope and severity of deficiency citations during supervisory reviews. Supervisory Reviews and Understatement. In certain cases, survey agency directors and state performance on federal comparative surveys linked supervisory reviews to understatement. Twenty-two percent of state agency directors reported that inadequate supervisory review processes at least sometimes contributed to understatement in their states. In addition, significant differences existed between zero-understatement states and all other states, including high-understatement states, in the percentage of surveyors reporting frequent changes to citations during supervisory reviews. Only about 4 percent of surveyors in zero-understatement states reported that citations were always or frequently removed or that the scope and severity cited were changed, while about 12 percent of surveyors in all other states indicated the same (see table 8). To address surveyor complaints about changes made during supervisory reviews, Nevada recently reduced its process from two steps to a single-step review by survey team supervisors. In addition, we observed a relationship between state practices to notify surveyors of changes made during supervisory reviews and surveyor reports of deficiency removal and explanation of changes. Specifically, compared to surveyors in states that require supervisors to notify surveyors of changes made during supervisory review, surveyors from states where no notification is required reported more often that supervisors removed deficiencies and less often that explanations for these changes, when given, were reasonable. Similarly, we found an association between the frequency of explained and reasonable changes and zero-understatement states, possibly demonstrating the positive effect of practices to notify surveyors of changes made during supervisory reviews. Nursing home surveyors from zero-understatement states more often reported that supervisors explained changes and that their explanations seemed reasonable compared to surveyors in all other states. State agency directors in Massachusetts and New Mexico stated that explanations of changes to the survey record provided opportunities for one-on-one feedback to surveyors and discussions about deficiencies being removed. Nursing home surveyors and state agency directors in a minority of states told us that, in isolated cases, issues such as a state agency practice of noncitation, external pressure from the nursing home industry, and an unbalanced IDR process may have led to the understatement of deficiencies. In a few states, surveyors identified problems with noncitation practices and IDR processes more often than state agency directors did. Yet a few state agency directors acknowledged noncitation practices, external pressure, or an IDR process that favored nursing home operators over resident welfare.
Although not all the issues raised by surveyors were corroborated by the state agency directors in their states, surveyor reports clustered in a few states give credence to the notion that such conditions may lead to understatement. Approximately 20 percent of surveyors nationwide and over 40 percent of surveyors in five states reported that their state agency had at least one of the following noncitation practices: (1) not citing certain deficiencies, (2) not citing deficiencies above a certain scope and severity level, and (3) allowing nursing homes to correct deficiencies without receiving a citation (see fig. 4). Only four state agency directors acknowledged the existence of such practices in their states on our questionnaire, and only one of these directors was from the five states most often identified by surveyors. One of these directors commented on our questionnaire that one of these practices occurs only in "rare individual cases." Another director commented that a particular federal quality standard is not related to patient outcome and therefore should not be cited above Level F. According to CMS protocols, when noncompliance with a federal requirement has been identified, the state agency should cite all deficiencies associated with the noncompliance. CMS regional officials we interviewed were not aware of any current statewide noncitation practices. Not citing certain deficiencies. Nationally, 9 percent of surveyors reported a state agency practice that surveyors not cite certain deficiencies. However, in four states over 30 percent of surveyors reported their state agency had this noncitation practice, including over 60 percent of New Mexico surveyors. In some cases, surveyors reported receiving direct instructions from supervisors not to cite certain deficiencies. In other cases, surveyors' reports of noncitation practices may have been based on their interpretation of certain management practices. For instance, surveyors commented that some state agency practices—such as providing inadequate time to observe and document deficiencies or frequently deleting deficiency citations during supervisory review—seemed like implicit or indirect leadership from the agency to avoid citing deficiencies. One state agency director we interviewed agreed that surveyors may report the existence of noncitation practices when their citations are changed during supervisory review. This official told us that when surveyors' deficiencies are deleted or downgraded, the surveyors may choose not to cite similar deficiencies in the future because they perceive being overruled as an implicit state directive not to cite those deficiencies. Not citing deficiencies above a certain scope and severity level. Although nationwide fewer than 8 percent of surveyors reported a state agency practice that surveyors not cite deficiencies above a certain scope and severity level, in two states over 25 percent of surveyors reported that their state agency used this type of noncitation practice. One reason state agencies might use this noncitation practice could be to help manage the agency's workload. In particular, citing deficiencies at a lower scope and severity might help the agency avoid additional work associated with citing higher-level deficiencies, such as survey revisits or IDR. In one of the two states mentioned above, 54 percent of surveyors indicated that the workload burden influenced their citations.
Additionally, as we described earlier, 16 percent of surveyors nationwide indicated that workload burden influenced the citation of deficiencies, and more than half of state agency directors (including those from the two states mentioned above) responded that staffing was not sufficient to complete revisit surveys. While our questionnaire focused on not citing deficiencies above a certain scope and severity level, a few surveyors commented on being discouraged from citing lower-level deficiencies due to time pressures to complete surveys. Agency officials in two states told us that surveyors may miss some deficiencies due to limited survey time and resources. Allowing nursing homes to correct deficiencies without citing them on the survey record. Nationwide, approximately 12 percent of surveyors reported this type of noncitation practice. However, in five states, at least 30 percent of surveyors reported their state agency allowed nursing homes to correct deficiencies without citing those deficiencies on the official survey record. Comments from surveyors suggest that state agencies may use this type of practice to avoid actions that nursing homes or the industry would dispute or interpret as excessive. Similarly, several surveyors commented that they were instructed to cite only one deficiency for a single type of negative outcome, even when more than one problem existed. However, CMS guidance requires state agencies to cite all problems that lead to a negative outcome. The decrease in G-level citations that occurred after CMS implemented the double G immediate sanctions policy in January 2000 also suggests that some states may have avoided citing deficiencies that would result in enforcement actions for the nursing home. The total number of G-level deficiency citations nationwide dropped from approximately 10,000 in 1999 to 7,700 in 2000. State agency directors from 12 states reported experiencing external pressure from at least one of the following stakeholder groups: (1) the nursing home surveyed, (2) the nursing home industry, or (3) state or federal legislators. Examples of such external pressure include pressure to reduce federal or state nursing home regulation or to delete specific deficiencies cited by the state agency. Of the 12 state agency directors, 7 reported that external pressure at least sometimes contributed to the understatement of deficiencies in their states, while the other 5 indicated that it infrequently or never contributed to understatement. Adversarial attitude toward nursing home surveys. Officials from two states we interviewed—State A and State B—commented on the adversarial attitude that industry and legislative representatives at times had toward nursing home surveys. For instance, state agency officials from State A told us that the state nursing home association organized several forums to garner public and legislative support for curtailing state regulation of facilities. According to officials in this state, the influential industry groups threatened to request legislation to move the state agency to a different department and to deny the confirmation of the director's gubernatorial appointment if citations of deficiencies at level G or higher increased. CMS regional office officials responsible for State A told us that the state may be experiencing more intense external pressure this year given the current economy, because providers have greater concerns about the possible financial implications of deficiency citations—fines or increased insurance rates.
Similarly, officials from State B told us that when facilities are close to termination, the state agency receives phone calls from state delegates questioning the agency's survey results. Officials from State B also told us that the Governor's office instructed the state agency not to recommend facilities for enforcement actions. Officials from the CMS regional office responsible for State B told us that this situation was not problematic because CMS was ultimately responsible for determining enforcement actions based on deficiency citations. However, this regional office's statement is inconsistent with (1) language in the SOM that calls for states to recommend enforcement actions to the regional office, and (2) assertions from the regional office responsible for State A that it infrequently disagrees with state recommendations for sanctions. A third state agency director commented that the agency had been called before state legislative committees in 2007-2008 to defend deficiency citations that led to the termination of facilities. A fourth state agency director also commented on our questionnaire that legislators had pressured the state agency on behalf of nursing homes to get citations reduced or eliminated and prevent enforcement actions for the facilities. In addition, a few surveyors commented that, at times, when nursing homes were unhappy with their survey results, the homes or their state legislators would ask state agency management to remove the citations from the survey record, resulting in the deletion or downgrading of deficiencies. Further, comments from a few surveyors indicated that they may avoid citing deficiencies when they perceive that a citation might cause a home to complain or exert pressure for changes in the survey record. Interference in the survey process. In a few cases, external pressure appeared to directly interfere with the nursing home survey process. State agency officials from two states—State A and an additional fifth state—reported that state legislators or industry representatives had appeared on-site during nursing home surveys. Although in some cases the legislators just observed the survey process, officials from these two states explained that third parties also have interfered with the process by questioning or intimidating surveyors. The state agency director from the fifth state commented on our questionnaire that the nursing home industry sent legal staff on-site during surveys to interfere with the survey process. Similarly, officials from State A told us that during one survey, a home's lawyer was on-site reviewing nursing home documentation before surveyors were given access to these documents. Officials from State A also told us that state legislators have attended surveys to question surveyors about their work and whether state agency executives were coercing them to find deficiencies. We discussed this issue with the CMS regional officials responsible for State A, who acknowledged that this type of interference had occurred. States' need for support from CMS. In the face of significant external pressure, officials from States A and B suggested that they need support from CMS; however, CMS regional office officials did not always acknowledge the external pressure reported by the states. This year, State A terminated a survey due to significant external pressure from a nursing home and requested that the CMS regional office complete the revisit survey on its behalf.
Six weeks later, the federal team completed the survey and found many of the same problems that the state team had previously identified before it stopped the survey. Officials from State A suggested the need for other support as well, such as a federal law that would require state agencies to report external pressure, ensure whistleblower protections for state officials who report such pressure, and allow sanctions for inappropriate conduct. CMS officials from the regional office responsible for State A stated that external pressure might indirectly contribute to understatement by increasing surveyor mistakes resulting from additional stress, workload, focus on documentation, and supervisory reviews. Conversely, CMS regional officials did not acknowledge that State B experienced external pressure, and officials from State B thought that CMS should be more consistent in its requirements and enforcement actions. States with unbalanced IDR processes may experience more understatement. IDR processes vary across states in structure, volume of proceedings, and resulting changes. According to state agency directors' responses to our questionnaire, 16 IDRs were requested per 100 homes in fiscal year 2007, with this number ranging among states from 0 to 57 per 100 homes. For IDRs occurring in fiscal year 2007, 20 percent of disputed deficiencies were deleted and 7 percent were downgraded in scope or severity, but in four states, at least 40 percent of disputed deficiencies were deleted through this process. CMS does not provide protocols on how states should operate their IDR processes, leaving IDR operations to state survey agencies' discretion. For example, states may choose to conduct IDR meetings in writing, by telephone, or through face-to-face conferences. State agencies also have the option to involve outside entities, including legal representation, in their IDR operations. On the basis of responses from surveyors and state agency directors clustered in a few states, problems with the IDR processes—such as frequent hearings, deficiencies that are frequently deleted or downgraded through the IDR process, or outcomes that favor nursing home operators over resident welfare—may have contributed to the understatement of deficiencies in those states. Although reports of such problems were not common—only 16 percent of surveyors nationwide reported on our questionnaire that their state's IDR process favored nursing home operators—in four states over 40 percent of surveyors reported that their IDR process favored nursing home operators (see fig. 5), including one state where a substantial percentage of surveyors identified at least one noncitation practice. While only one state agency director reported that the IDR process favored nursing home operators, three other directors acknowledged that frequent IDR hearings at least sometimes contributed to the understatement of deficiencies. For example, in some states surveyors may hesitate to cite deficiencies that they believe will be disputed by the nursing home. In isolated cases, a lack of balance with the IDR process appeared to be a result of external pressure. In one state, the state agency director reported that the nursing home industry sent association representatives to the IDR, which increased the contentiousness of the process. In another state, officials told us that a large nursing home chain worked with the state legislature to set up an alternative to the state IDR process, which has been used only by facilities in this chain.
Through this alternative appeals process, both the state agency and the nursing home have legal representation, and compliance decisions are made by an adjudicator. According to agency officials in this state, the adjudicators for this alternative appeals process do not always have health care backgrounds. While CMS gives states the option to allow outside entities to conduct the IDR, the states should maintain ultimate responsibility for IDR decisions. CMS regional officials stated that the agency would not consider the outcome of this alternative appeals process when assessing deficiencies or determining enforcement actions. Regardless, these actions may have affected surveyors' perceptions of the balance of the state's IDR process, because more than twice the national average percentage of surveyors in this state reported that their IDR process favored nursing home operators. Reducing understatement is critical to protecting the health and safety of vulnerable nursing home residents and ensuring the credibility of the survey process. Federal and state efforts will require a sustained, long-term commitment because understatement arises from weaknesses in several interrelated areas—including CMS's survey process, surveyor workforce and training, supervisory review processes, and state agency practices and external pressure. Concerns about CMS's Survey Process. Survey methodology and guidance are integral to reliable and consistent state nursing home surveys, and we found that weaknesses in these areas were linked to understatement by both surveyors and state agency directors. Both groups reported struggling to interpret existing guidance, and differences in interpretation were linked to understatement, especially in determining what constitutes actual harm. Surveyors noted that the current survey guidance was too lengthy, complex, and subjective. Additionally, they had fewer concerns about care areas for which CMS has issued revised interpretive protocols. In its development of the QIS, CMS has taken steps to revise the nursing home survey methodology. However, development and implementation of the QIS in a small group of states have taken approximately 10 years, and full implementation of the new methodology is not expected to be completed until 2014. Experience with the QIS was mixed regarding improvement in the quality of surveys, and the independent evaluation generated a number of recommendations for improving the QIS. CMS concluded that it needed to focus future QIS development efforts on improving survey consistency and giving supervisors more tools to assess the performance of surveyor teams. Ongoing Workforce and Surveyor Training Challenges. Workforce shortages in state survey agencies increase the need for high-quality initial and ongoing training for surveyors. Currently, high vacancy rates can place pressure on state surveyors to complete surveys under difficult circumstances, including compressed time frames, inadequately staffed survey teams, and too many inexperienced surveyors. States are responsible for hiring and retaining surveyors and have grappled with pervasive and intractable workforce shortages. State agency directors struggling with these workforce issues reported the need for more readily accessible training for both their new and experienced surveyors that did not involve travel to a central location.
Nearly 30 percent of surveyors in high-understatement states stated that initial surveyor training, which is primarily a state activity that incorporates two CMS online computer courses and a 1-week federal basic training course culminating in the SMQT, was not adequate to prepare them to identify deficiencies and cite them at the appropriate scope and severity level. State agency directors reported that workforce shortages also impede states' ability to provide ongoing training opportunities for experienced staff and that additional CMS online training and electronic training media would help states maintain an experienced, well-informed workforce. They noted that any such support should be cognizant of states' current resource constraints, including limited funding of travel for training. Supervisory Review Limitations. Currently, CMS provides little guidance on how states should structure supervisory review processes, leaving the scope of this important quality-assurance tool exclusively to the states and resulting in considerable variation throughout the nation in how these processes are structured. We believe that state quality assurance processes are a more effective preventive measure against understatement because they have the potential to be more immediate and cover more surveys than the limited number of federal comparative surveys conducted in each state. However, compared to reviews of serious deficiencies, states conducted relatively few reviews of deficiencies at the D through F level—those most frequently understated throughout the nation—to assess whether such deficiencies were cited at too low a scope and severity level. In addition, we found that frequent changes to survey results made during supervisory review were symptomatic of workforce shortages and survey methodology weaknesses. For example, surveyors who reported that survey teams had too many new surveyors more often also reported either frequent changes to or removals of deficiencies during supervisory reviews—indicating that states with inexperienced workforces may rely more heavily on supervisory reviews. In addition, variation existed in the type of feedback surveyors received when deficiencies were changed or removed during supervisory reviews, providing surveyors with inconsistent access to valuable feedback and training. CMS did not implement our previous recommendation to require states to have a quality assurance process that includes, at a minimum, a review of a sample of survey reports below the actual harm level to assess the appropriateness of the scope and severity cited and help reduce understatement. State Agency Practices and External Pressure. In a few states, noncitation practices, challenging relationships with the industry or legislators, or unbalanced IDR processes—those that surveyors regard as favoring nursing home operators over resident welfare—may have had a negative effect on survey quality and resulted in the citation of fewer nursing home deficiencies than was warranted. In one state, both the state agency director and over 40 percent of surveyors acknowledged the existence of a noncitation practice, such as allowing a home to correct a deficiency without receiving a citation. Forty percent of surveyors in four other states also responded on our questionnaire that noncitation practices existed.
Currently, CMS does not explicitly address such practices in its guidance to states, and its oversight is limited to reviews of citation patterns, feedback from state surveyors, state performance reviews, and federal monitoring surveys to determine whether such practices exist. Twelve state agency directors reported on our questionnaire that they experienced some kind of external pressure. For example, in one state a legislator attended a survey and questioned surveyors as to whether state agency executives were coercing them to find deficiencies. Under such circumstances, it is difficult to know whether the affected surveyors are consistently enforcing federal standards and reporting all deficiencies at the appropriate scope and severity levels. States' differing experiences regarding the enforcement of federal standards and collaboration with their CMS regional offices in the face of significant external pressure also may confuse or undermine a thorough and independent survey process. If surveyors believe that CMS does not fully or consistently support the enforcement of federal standards, these surveyors may choose to avoid citing deficiencies that they perceive may trigger a reaction from external stakeholders. In addition, deficiency determinations may be influenced when IDR processes are perceived to favor nursing home operators over resident welfare. Because many aspects of federal and state operations contribute to the understatement of deficiencies on nursing home surveys, mitigating this problem will require the concerted effort of both entities. The interrelated nature of these challenges suggests a need for increased CMS attention to the areas noted above and additional federal support for states' efforts to enforce federal nursing home quality standards. To address concerns about weaknesses in CMS survey methodology and guidance, we recommend that the Administrator of CMS take the following two actions: make sure that action is taken to address concerns identified with the new QIS methodology, such as ensuring that it accurately identifies potential quality problems; and clarify and revise existing CMS written guidance to make it more concise, simplify its application in the field, and reduce confusion, particularly on the definition of actual harm. To address surveyor workforce shortages and insufficient training, we recommend that the Administrator of CMS take the following two actions: consider establishing a pool of additional national surveyors that could augment state survey teams or identify other approaches to help states experiencing workforce shortages; and evaluate the current training programs and division of responsibility between federal and state components to determine the most cost-effective approach to (1) providing initial surveyor training to new surveyors and (2) supporting the continuing education of experienced surveyors. To address inconsistencies in state supervisory reviews, we recommend that the Administrator of CMS take the following action: set an expectation through guidance that states have a supervisory review program as a part of their quality-assurance processes that includes routine reviews of deficiencies at the level of potential for more than minimal harm (D-F) and that provides feedback to surveyors regarding changes made to citations.
To address state agency practices and external pressure that may compromise survey accuracy, we recommend that the Administrator of CMS take the following two actions: reestablish expectations through guidance to state survey agencies that noncitation practices—official or unofficial—are inappropriate, and systematically monitor trends in states' citations; and establish expectations through guidance to state survey agencies to communicate and collaborate with their CMS regional offices when they experience significant pressure from legislators or the nursing home industry that may affect the survey process or surveyors' perceptions. We provided a draft of this report to HHS and AHFSA for comment. In response, the Acting Administrator of CMS provided written comments. CMS noted that the report adds value to important public policy discussions regarding the survey process and contributes ideas for solutions to the potential underlying causes of understatement. CMS fully endorsed five of our seven recommendations and indicated it would explore alternate solutions to our remaining two recommendations, one of which the agency did not plan to implement on a national scale. (CMS's comments are reprinted in appendix II.) AHFSA's comments noted that several states agreed with one of our recommendations, but did not directly express agreement or disagreement with the other recommendations. AHFSA made several other comments on our findings and recommendations, as summarized below. CMS agreed with five of our recommendations that called for: (1) addressing issues identified with the new QIS methodology, (2) evaluating current training programs, (3) setting expectations that states have a supervisory review program, (4) reestablishing expectations that noncitation practices are inappropriate, and (5) establishing expectations that states communicate with their CMS regional office when they experience significant pressure from legislators or the nursing home industry. In its comments, the agency cited several ongoing efforts as mechanisms for addressing some of our recommendations. While we acknowledge the importance of these ongoing efforts, in some areas we believe more progress and investigation are likely needed to fully address our findings and recommendations. For example, we recommended that CMS ensure that measures are taken to address issues identified with the new QIS methodology, such as ensuring that it accurately identifies potential quality problems; CMS's response cited Desk Audit Reports that enable supervisors to provide improved feedback to surveyors and quarterly meetings of a user group as evidence of efforts under way to continuously improve the QIS and to increase survey consistency. However, we noted that a 2007 evaluation of the QIS did not find improved survey accuracy compared to the traditional survey process and recommended that CMS evaluate how well the QIS accurately identifies areas in which there are potential quality problems. While improving the consistency of the survey process is important, CMS must also focus on addressing the accuracy of QIS surveys. For the remaining two recommendations, CMS described alternative solutions that the agency indicated it would explore: Guidance. The agency agreed in principle with our recommendation to clarify and revise existing written guidance to make it more concise, simplify its application in the field, and reduce confusion.
However, CMS disagreed with shortening the guidance as the preferred method for achieving such clarification. Instead, the agency suggested an alternative—the creation of some short reference documents for use in the field that contain cross-links back to the full guidance—that we believe would fulfill the intent of our recommendation. National surveyor pool. CMS indicated it did not plan to implement our recommendation to consider establishing a pool of additional national surveyors that could augment state survey teams experiencing workforce shortages, at least not on a national scale. The agency stated that the establishment of national survey teams was problematic for several reasons, including that it (1) began to blur the line between holding states accountable for meeting performance expectations and compensating states for problematic performance due to state management decisions and (2) would improperly involve CMS in telling states how to make personnel decisions. While the agency noted that it used national contractors to perform surveys for other types of facilities, such as organ transplant centers, it expressed concern about their use to compensate for state performance issues because nursing home surveys occur more frequently. We believe that state workforce shortages are a separate issue from state performance on surveys. Since 2003, we have reported pervasive state workforce shortages, and this report confirms that such shortages continue. For example, we reported that one-fourth of states had vacancy rates higher than 19 percent and that one state reported a 72 percent vacancy rate. We also believe that addressing workforce shortages is critical to creating an effective system of oversight for nursing homes and reducing understatement throughout the nation. However, CMS noted that it would explore this issue with a state-federal work group in order to identify any circumstances in which a national pool may be advisable and to identify any additional solutions. Reflecting this comment from CMS, we have revised our original recommendation to include other potential solutions as well as a national pool of surveyors. One suggestion in AHFSA's comments may be worth exploring in this regard—providing funds to state survey agencies for recruitment and retention activities. AHFSA commented that vigorous oversight and enforcement are essential to improving the quality of life and quality of care for health care consumers and are critical if improvements already achieved are to be maintained. The association noted that several states agreed with our recommendation on the need for CMS to revise existing written guidance to make it more concise. While the association did not directly express agreement or disagreement with our other recommendations, it did note that most states would need additional funding to meet any new staffing requirements associated with our recommendation that CMS set an expectation for states to have a supervisory review program. However, AHFSA noted what it considered to be conflicting assertions within the report. For example, it noted that we cited inexperienced staff as a factor that contributes to understatement but also appeared to take issue with the practice of supervisors changing reports prepared by inexperienced staff. While our report identifies a wide variety of factors that may contribute to understatement, we did not and could not meaningfully prioritize among these factors based on the responses of nursing home surveyors and state agency directors.
We did find that many states were attempting to accomplish their survey workload with a large share of inexperienced surveyors and that state agency directors sometimes linked this reliance on inexperienced staff to the understatement of nursing home deficiencies. In addition, we found that frequent changes made during supervisory review were symptomatic of workforce shortages and survey methodology weaknesses. For example, surveyors who reported that survey teams had too many new surveyors more often also reported either frequent changes to or removals of deficiencies during supervisory reviews. We believe that state quality assurance processes have the potential to play an important role in preventing understatement, which may lead states with inexperienced workforces to rely more heavily on supervisory reviews. AHFSA also stated that our report did not address limitations of federal monitoring surveys, specifically the potential inconsistency among CMS regional offices in how these surveys are conducted. Assessing CMS's performance on federal monitoring surveys was beyond the scope of this report. However, our May 2008 report noted several improvements CMS had made since fiscal years 2002 and 2003 in federal comparative surveys intended to make them more comparable to the state surveys they are assessing; these improvements include (1) reducing the time between the state and federal surveys to ensure that they more accurately capture the conditions at the time of the state survey, (2) including at least half of the residents from state survey investigative samples to allow for a more clear-cut determination of whether the state survey should have cited a deficiency, and (3) using the same number of federal surveyors as the corresponding state survey, again to more closely mirror the conditions under which the state survey was conducted. Finally, AHFSA questioned whether the information that we received from surveyors about the IDR process was universally valid because their input about quality assurance reviews might be biased. Our methodology did not rely solely on surveyor responses to our questionnaire but used a separate questionnaire sent to state survey agency directors to help corroborate their responses. Thus, we reported both that (1) over 40 percent of surveyors in four states indicated that their IDR process favored nursing home operators and (2) one state survey agency director agreed and three others acknowledged that frequent IDR hearings sometimes contributed to the understatement of deficiencies. We also collected and reported data on the number of deficiencies modified or overturned, which AHFSA said was a more accurate measure of the effect of IDRs. We also incorporated technical comments from AHFSA as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This appendix describes the data and methods we used to identify the factors that contribute to the understatement of serious deficiencies on nursing home surveys. This report relies largely on the data collected through (1) two GAO-administered Web-based questionnaires to nursing home surveyors and state agency directors and (2) analysis of federal and state nursing home survey results as reported in the federal monitoring survey database and the On-Line Survey, Certification, and Reporting (OSCAR) system. Summary results from the GAO questionnaires are available as an e-supplement to this report. See Nursing Homes: Responses from Two Web-Based Questionnaires to Nursing Home Surveyors and State Agency Directors (GAO-10-74SP), an E-supplement to GAO-10-70. To augment our quantitative analysis, we also interviewed officials at the Centers for Medicare & Medicaid Services (CMS) Survey and Certification Group and select regional offices; reviewed federal regulations, guidance, and our prior work; and conducted follow-up interviews with eight state agency directors and a select group of surveyors. Except where otherwise noted, we used data from fiscal year 2007 because they were the most recently available data at the time of our analysis. We developed two Web-based questionnaires—one for the nursing home surveyors and one for the state agency directors. The questionnaires were developed and the data collection and analysis conducted to (1) minimize errors arising from differences in how a particular question might be interpreted and in the sources of information available to respondents and (2) reduce variability in responses that should be qualitatively the same. GAO social science survey specialists aided in the design and development of both questionnaires. We pretested the two questionnaires with six surveyors from a local state and five former or current state agency directors, respectively. Based on feedback from these pretests, the questionnaires were revised to improve clarity and the precision of responses and to ensure that all questions were fair and unbiased. Most questions were closed-ended, which limited the respondent to answers such as yes or no, or to identifying the frequency with which an event occurred using a scale—always, frequently, sometimes, infrequently, or never. For reporting purposes, we grouped the scaled responses into three categories—always/frequently, sometimes, and infrequently/never. Both questionnaires included some open-ended questions to allow respondents to identify specific training needs or other concerns. With few exceptions, respondents entered their responses directly into the Web-based questionnaire databases. These questionnaires were sent to the eligible population of nursing home surveyors and all state agency directors. We performed computer analyses to identify illogical or inconsistent responses and other indications of possible error. We also conducted follow-up interviews with select respondents to clarify and gain a contextual understanding of their responses. This questionnaire was designed to gather information from nursing home surveyors nationwide about the process for identifying and citing nursing home deficiencies. It included questions about various aspects of the survey process identified by our prior work that may contribute to survey inconsistency and the understatement of deficiencies.
Such aspects included survey methodology and guidance, deficiency determination, surveyor training, supervisory review of draft surveys, and state agency policies and procedures. We fielded the questionnaire from May through July 2008 to 3,819 eligible nursing home surveyors. To identify the eligible population, we downloaded a list of identification numbers for surveyors who had conducted at least one health survey of a nursing home in fiscal years 2006 or 2007 from CMS’s OSCAR database and we obtained surveyors’ e-mail addresses from state survey agencies. We received complete responses from 2,340 state surveyors, for a 61 percent response rate. The state-level response rates were above 40 percent for all but three states— Connecticut, Illinois, and Pennsylvania. We excluded Pennsylvania from our analysis because Pennsylvania’s Deputy Secretary for Quality Assurance instructed the state’s surveyors not to respond to our survey and few responded. (For response rates by state, see table 9.) The questionnaire for state agency directors was designed to gather information on the nursing home survey process in each state. Directors were asked many of the same questions as the surveyors, but the survey agency directors’ questionnaire contained additional questions on the overall organization of the survey agency, resource and staffing issues, CMS’s Quality Indicator Survey (QIS), and experience with CMS’s federal monitoring surveys. In addition, the questionnaire for state agency directors asked them to rank the degree to which several factors, derived from our previous work, contributed to understatement. This questionnaire was fielded from September to November 2008 to all 50 state survey agency directors and the survey agency director for the District of Columbia. We received completed responses from 50 of 51 survey agency directors, for a 98 percent response rate. The District of Columbia survey agency director did not respond. To analyze results from the survey questions among groups, we used standard descriptive statistics. In addition, we looked for associations between questions through correlations and tests of the differences in means for groups. For certain open-ended questions, we used a standard content review method to identify topics that respondents mentioned such as “applying CMS guidance,” “on-the-job training,” “time to complete survey onsite,” or “time to complete the survey paperwork.” Our coding process involved one independent coder and an independent analyst who verified a random sample of the coded comments. For open-ended questions that enabled respondents to provide additional general information, we used similar standard content review methods, including independent coding by two raters who resolved all disagreements through discussion. In addition to the precautions taken during the development of the questionnaires, we performed automated checks on these data to identify inappropriate answers. We also reviewed the data for missing or ambiguous responses. Where comments on open-ended questions provided more detail or contradicted responses to categorical questions, the latter were corrected. 
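To illustrate the data processing just described, the following sketch shows one way the collapse of scaled responses into the three reporting categories and the state-level response-rate checks might be implemented. It is an illustration only, not GAO's actual analysis code; the data, field names (state, completed, q_example_item), and the questionnaire item are hypothetical.

```python
# Illustrative sketch only -- not GAO's actual analysis code.
# The DataFrame, field names, and values below are hypothetical.
import pandas as pd

surveyors = pd.DataFrame({
    "state": ["CT", "CT", "IL", "OH", "OH", "PA"],
    "completed": [1, 0, 0, 1, 1, 0],  # 1 = complete questionnaire received
    "q_example_item": ["always", "frequently", "sometimes", None, "never", None],
})

# Collapse the five scaled response options into the three reporting categories.
REPORTING_GROUPS = {
    "always": "always/frequently",
    "frequently": "always/frequently",
    "sometimes": "sometimes",
    "infrequently": "infrequently/never",
    "never": "infrequently/never",
}
surveyors["q_example_group"] = surveyors["q_example_item"].map(REPORTING_GROUPS)

# Overall and state-level response rates; flag states below 40 percent.
overall_rate = surveyors["completed"].mean()
state_rates = surveyors.groupby("state")["completed"].mean()
low_response_states = state_rates[state_rates < 0.40].index.tolist()

# Pennsylvania was excluded from the analysis, as described above.
analysis_set = surveyors[surveyors["state"] != "PA"]

print(f"Overall response rate: {overall_rate:.0%}")
print("States below 40 percent:", low_response_states)
```

Keeping the collapse rule in an explicit mapping table means unmapped or missing responses surface as blanks, which supports the kind of automated consistency checks described above.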
On the basis of the strength of our systematic survey processes and follow-up procedures, we determined that the questionnaire responses were representative of the experience and perceptions of nursing home surveyors and state agency directors nationally and at the state level, with the exception of Pennsylvania surveyors and the survey agency director of the District of Columbia. On the basis of the response rates and these activities, we determined that the data were sufficiently reliable for our purposes. We also interviewed directors and other state agency officials in eight states to better understand unusual or interesting circumstances related to surveyor workforce and training, supervisory review, or state policies and practices. We selected these eight states based on our analysis of questionnaire responses from state agency directors and nursing home surveyors. We used information from our May 2008 report on federal comparative surveys nationwide for fiscal years 2002 through 2007 to categorize states into groups. We used these results to identify states with high and low percentages of serious missed deficiencies. We classified nine states as high-understatement states—those in which 25 percent or more of federal comparative surveys identified at least one missed deficiency at the actual harm or immediate jeopardy levels across all years. These states were Alabama, Arizona, Missouri, New Mexico, Oklahoma, South Carolina, South Dakota, Tennessee, and Wyoming. Zero-understatement states were those that had no federal comparative surveys identifying missed deficiencies at the actual harm or immediate jeopardy levels. These seven states were Alaska, Idaho, Maine, North Dakota, Oregon, Vermont, and West Virginia. Low-understatement states were the 10 with the lowest percentage of missed serious deficiencies (less than 6 percent)—Arkansas, Nebraska, Ohio, and all seven zero-understatement states. Response rates among the high-, low-, and zero-understatement states—approximately 77, 62, and 71 percent, respectively—supported statistical testing of associations and differences among these state groupings. Therefore, in addition to descriptive statistics, we used correlations and tests of the differences in means for groups to identify questionnaire responses that were associated with differences in understatement. We reported the statistically significant results for tests of association and differences between group averages at the 5 percent level, unless otherwise noted. In a previous report, we found a possible relationship between the understatement of nursing home deficiencies on the federal comparative surveys and surveyor performance in General Investigation and Deficiency Determination on federal observational surveys—that is, high-understatement states more often had below-satisfactory ratings in General Investigation and Deficiency Determination than low-understatement states. For this report, we applied the same statistical analysis to identify when responses to our questionnaires were associated with satisfactory performance on General Investigation and Deficiency Determination skills on the federal observational surveys. We interpreted such relationships as an indication of additional training needs. We used information from OSCAR and the federal monitoring survey databases to (1) compare the deficiencies cited by state and federal surveyors, (2) analyze the timing of nursing home surveys, and (3) assess trends in deficiency citations.
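Before turning to the OSCAR-based analyses, the following sketch illustrates how the state groupings and difference-in-means comparisons described above might be carried out. It is illustrative only, not GAO's code: the state-level percentages and the questionnaire measure are hypothetical, and the Welch two-sample t-test shown is one common way to test differences in group means at the 5 percent level, not necessarily the specific test GAO applied.

```python
# Illustrative sketch only -- hypothetical data and field names.
import pandas as pd
from scipy import stats

states = pd.DataFrame({
    "state": ["AL", "AZ", "MO", "AK", "ID", "ME", "AR", "NE", "OH", "CA"],
    # Share of a state's federal comparative surveys with at least one missed
    # deficiency at the actual harm or immediate jeopardy level (hypothetical).
    "pct_missed_serious": [0.31, 0.27, 0.29, 0.00, 0.00, 0.00, 0.04, 0.05, 0.03, 0.12],
    # A state-level questionnaire measure to compare across groups (hypothetical).
    "questionnaire_measure": [3.1, 3.4, 3.0, 2.1, 2.2, 2.0, 2.4, 2.3, 2.2, 2.7],
})

def understatement_group(pct: float) -> str:
    """Apply the grouping thresholds described in the text."""
    if pct >= 0.25:
        return "high"  # 25 percent or more of comparative surveys had a serious miss
    if pct < 0.06:
        return "low"   # under 6 percent; includes the zero-understatement states
    return "other"

states["group"] = states["pct_missed_serious"].apply(understatement_group)

# Difference in means between high- and low-understatement states, reported as
# significant when p < 0.05 (the 5 percent level).
high = states.loc[states["group"] == "high", "questionnaire_measure"]
low = states.loc[states["group"] == "low", "questionnaire_measure"]
t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant at 5 percent: {p_value < 0.05}")
```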
OSCAR is a comprehensive database that contains information on the results of state nursing home surveys. CMS reviews these data and uses them to compute nursing home facility and state performance measures. When we analyzed these data, we included automated checks of data fields to ensure that they contained complete information. For these reasons, we determined that the OSCAR data were sufficiently reliable for our purposes. We used OSCAR and the federal monitoring survey database to compare average facility citations on state survey records with the average citations on federal observational survey records for the same facilities during fiscal years 2002 through 2007. We computed the average number of serious deficiencies cited on federal observational surveys from fiscal years 2002 through 2007 and, for the same facilities and time period, calculated the average number of serious deficiencies cited on state surveys. Next, we determined which facilities had greater average serious deficiency citations on federal observational surveys than on state standard surveys during fiscal years 2002 through 2007. For these facilities, we computed the percentage difference between the average number of serious deficiencies cited on federal observational surveys and those cited on state surveys. We used OSCAR to determine the percentage of the most recent state surveys that were predictable because of their timing. Our analysis of survey predictability compared the time between state agencies' current and prior standard nursing home surveys as of June 2008. According to CMS, states consider 9 months to 15 months from the last standard survey as the window for completing standard surveys because it yields a 12-month average. We considered surveys to be predictable if (1) homes were surveyed within 15 days of the 1-year anniversary of their prior survey or (2) homes were surveyed within 1 month of the maximum 15-month interval between standard surveys (an illustrative sketch of this timing check follows the list of related products below). We calculated the number of serious deficiencies on state surveys in OSCAR from calendar year 1999 through 2007. We examined the trend in G-level and higher deficiencies to assess whether CMS's expanded enforcement policy appeared to affect citation rates. Effective January 2000, CMS completed the implementation of its immediate-sanctions policy, requiring the referral for immediate sanctions of homes cited for actual harm or immediate jeopardy deficiencies on successive standard surveys or intervening complaint investigations. In addition to the contact named above, Walter Ochinko, Assistant Director; Stefanie Bzdusek; Leslie V. Gordon; Martha R. W. Kelly; Katherine Nicole Laubacher; Dan Lee; Elizabeth T. Morrison; Dan Ries; Steve Robblee; Karin Wallestad; Rachael Wojnowicz; and Suzanne Worth made key contributions to this report. Nursing Homes: Opportunities Exist to Facilitate the Use of the Temporary Management Sanction. GAO-10-37R. Washington, D.C.: November 20, 2009. Nursing Homes: CMS's Special Focus Facility Methodology Should Better Target the Most Poorly Performing Homes, Which Tended to Be Chain Affiliated and For-Profit. GAO-09-689. Washington, D.C.: August 28, 2009. Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine Its Approach for Funding State Oversight of Health Care Facilities. GAO-09-64. Washington, D.C.: February 13, 2009. Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008.
Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
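The timing criteria used to classify state surveys as predictable reduce to a simple date calculation. The sketch below encodes one reasonable reading of the two windows described in the methodology; it is illustrative only, not GAO's code, and it approximates months as 30 days, which may differ from the exact convention GAO used.

```python
# Illustrative sketch only -- one reading of the predictability windows
# described in the methodology, not GAO's actual code.
from datetime import date

def is_predictable(prior_survey: date, current_survey: date) -> bool:
    interval_days = (current_survey - prior_survey).days
    # Window 1: within 15 days of the 1-year anniversary of the prior survey.
    near_anniversary = abs(interval_days - 365) <= 15
    # Window 2: within about 1 month of the maximum 15-month interval between
    # standard surveys (30-day months used as an approximation).
    near_maximum = (15 * 30) - 30 <= interval_days <= 15 * 30
    return near_anniversary or near_maximum

# Hypothetical example: prior survey June 1, 2007; current survey June 10, 2008.
print(is_predictable(date(2007, 6, 1), date(2008, 6, 10)))  # True: near the anniversary
```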
Under contract with the Centers for Medicare & Medicaid Services (CMS), states conduct surveys at nursing homes to help ensure compliance with federal quality standards. Over the past decade, the Government Accountability Office (GAO) has reported on inconsistencies in states' assessment of nursing homes' quality of care, including understatement—that is, when state surveys fail to cite serious deficiencies or cite them at too low a level. In 2008, GAO reported that 9 states had high and 10 had low understatement based on CMS data for fiscal years 2002 through 2007. This report examines the effect on nursing home deficiency understatement of CMS's survey process, workforce shortages and training, supervisory reviews of surveys, and state agency practices. GAO primarily collected data through two Web-based questionnaires sent to all eligible nursing home surveyors and state agency directors, achieving 61 and 98 percent response rates, respectively. A substantial percentage of both state surveyors and directors identified general weaknesses in the nursing home survey process, that is, the survey methodology and guidance on identifying deficiencies. On the questionnaires, 46 percent of surveyors and 36 percent of directors reported that weaknesses in the traditional survey methodology, such as too many survey tasks, contributed to understatement. Limited experience with a new data-driven survey methodology indicated possible improvements in consistency; however, an independent evaluation led CMS to conclude that other tools, such as survey guidance clarification and surveyor training and supervision, would help improve survey accuracy. According to questionnaire responses, workforce shortages and greater use of surveyors with less than 2 years' experience sometimes contributed to understatement. Nearly three-quarters of directors reported that they always or frequently experienced a workforce shortage, while nearly two-thirds reported that surveyor inexperience always, frequently, or sometimes led to understatement. Substantial percentages of directors and surveyors indicated that inadequate training may compromise survey accuracy and lead to understatement. According to about 29 percent of surveyors in 9 high-understatement states compared to 16 percent of surveyors in 10 low-understatement states, initial surveyor training was not sufficient to cite appropriate scope and severity—a skill critical in preventing understatement. Furthermore, over half of directors identified the need for ongoing training for experienced surveyors on both this skill and on documenting deficiencies, a critical skill to substantiate citations. CMS provides little guidance to states on supervisory review processes. In general, directors reported on our questionnaire that supervisory reviews occurred more often on surveys with higher-level rather than on those with lower-level deficiencies, which were the most frequently understated. Surveyors who reported that survey teams had too many new surveyors also reported frequent changes to or removal of deficiencies, indicating heavier reliance on supervisory reviews by states with inexperienced surveyors. Surveyors and directors in a few states informed us that, in isolated cases, state agency practices or external pressure from stakeholders, such as the nursing home industry, may have led to understatement. Forty percent of surveyors in five states and four directors reported that their state had at least one practice not to cite certain deficiencies.
Additionally, over 40 percent of surveyors in four states reported that their states' informal dispute resolution processes favored concerns of nursing home operators over resident welfare. Furthermore, directors from seven states reported that pressure from the industry or legislators may have compromised the nursing home survey process, and two directors reported that CMS's support is needed to deal with such pressure. If surveyors perceive that certain deficiencies may not be consistently upheld or enforced, they may choose not to cite them.
Insurers, state insurance regulators, and NAIC all have roles that are important to the continued functioning of the insurance sector and to U.S. consumers and businesses. Insurers provide services that allow individuals and businesses to manage risk by providing compensation for certain losses or expenses, such as car crashes, fires, medical services, or inability to work. Some insurers also provide access to certain financial services, such as annuities and mutual funds. State insurance regulators are responsible for enforcing state insurance laws and regulations, including through the licensing of agents, the approval of insurance products and their rates, and the examination of insurers’ financial solvency and market conduct. State regulators typically conduct financial solvency examinations every 3 to 5 years, while market conduct examinations are generally done in response to specific consumer complaints or regulatory concerns. State regulators also monitor the resolution of consumer complaints against insurers. NAIC is a voluntary association of the heads of insurance departments from the 50 states, the District of Columbia, and five U.S. territories. While NAIC does not regulate insurers, it does provide services designed to make certain interactions between insurers and regulators more efficient. These services include providing detailed insurance data to help regulators understand insurance sales and practices; maintaining a range of databases useful to regulators; and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools. Insurance companies are regulated by the states, unlike the banking and securities industries, which are regulated under a dual federal-state oversight system. In addition to critical functions such as oversight of insurers’ financial solvency, state insurance regulation involves key regulatory processes, including: licensing insurance producers, including insurance agents, brokers, and companies; reviewing and approving insurance products and rates; and reviewing and examining insurers’ market conduct. Licensing producers consists of reviewing license applications to sell insurance products, reviewing applicants’ criminal and regulatory background, if any, and approving or denying applications and issuing licenses. During product approval processes, regulators review insurers’ products and rates, in some cases, before they enter the market for sale to consumers. Regulators review policy forms, which are legal contracts that describe the characteristics of the products insurers intend to sell and the rates or prices they intend to charge, and then grant or deny product approval. Not all products are subject to prior approval. Regulators’ market conduct oversight involves protecting consumers by monitoring and examining the conduct of insurance producers. To fulfill this role, state regulators analyze information that they periodically collect on the marketing and sales behavior of insurers in order to identify any problems. Regulators also conduct periodic market conduct examinations to investigate insurers’ market behaviors in greater depth. Regulators may issue findings and work with insurers on corrective actions identified as a result of market analysis and market conduct examinations. NAIC assists state regulators in their efforts to oversee the insurance industry and serves regulators with a variety of functions. 
While NAIC does not have regulatory authority over state insurance departments, it collects, stores, and analyzes detailed insurance data to help regulators understand insurance sales and practices. NAIC data and databases provide information that regulators can use during their producer licensing, product approval, and market conduct processes. NAIC also helps states coordinate regulatory efforts by providing guidance, model and recommended laws and regulations, and information-sharing tools. State legislatures may implement NAIC's model laws by enacting them, or substantively similar legislation, in their states. NAIC generally operates through a system of working groups, task forces, and committees made up of state regulators and NAIC officials that identify issues, facilitate interstate communication, and propose regulatory improvements. These entities meet periodically to discuss issues, build consensus on reforms, and vote to adopt new standards, model laws, and model regulations. These processes are cooperative but often take months or years to complete because of the number of participants and diversity of priorities involved. In addition to these functions, NAIC developed and implemented a financial accreditation program in 1990 to periodically review state insurance departments against baseline financial solvency oversight standards. Accreditation standards require state insurance departments to have adequate statutory and administrative authority to regulate insurers' corporate and financial affairs. NAIC is considering, but has not yet developed, an accreditation program for regulation of insurers' market conduct. In 2003, NAIC created its Modernization Plan to highlight areas in which NAIC and state regulators planned improvements for oversight of the insurance industry. The plan reinforced the primary goals of protecting consumers and creating a competitive and responsive insurance market. For producer licensing, the plan sought to implement a uniform electronic licensing system for individuals and business entities that sell insurance. The Modernization Plan specifically called for implementation of a single, uniform license application and full implementation of an electronic fingerprint system as part of the licensing process. For product approval, NAIC and state regulators planned to fully implement and use the System for Electronic Rate and Form Filing (SERFF) for product filings. They also planned to develop an interstate compact to provide a central point of filing for certain life and annuity products that would be accepted across states and would feature uniform national product standards. For market conduct regulation, the plan noted the need for a common set of standards for uniform market regulatory oversight that includes all states. In particular, it called for each state to adopt uniform market analysis standards and procedures, improve interstate collaboration, and integrate market analysis with other key regulatory functions. Since 2000, we have issued a number of reports on state regulators' oversight of the insurance industry, including reports on improving regulatory efficiency, uniformity, and reciprocity in the areas of producer licensing, product approval, and market conduct regulation.
We have made numerous recommendations to NAIC and the states concerning the lack of full criminal background checks by insurance regulators during licensing processes, the need for more uniform product approvals across states, and the difficulties insurance regulators face in sharing information, including with regulators from other parts of the financial services sector. NAIC generally concurred with these recommendations and stated that it would take steps to address them. These reports have also recognized the importance of establishing uniform minimum market conduct standards that are consistently used across states. See Related GAO Products for a list of relevant insurance reports that we issued between 2000 and 2008. In a recent report looking at the broader financial regulatory system, we developed a framework for assessing the strengths and weaknesses of proposals for regulatory modernization that included a number of goals relevant to a discussion of reciprocity and uniformity of insurance regulation. Specifically, any financial regulatory system should, for example: be flexible and able to readily adapt to innovations and changes; be efficient and effective, eliminating overlap and minimizing regulatory burden while effectively achieving regulatory goals; provide consumers with consistent protections for similar financial products and services, including sales practice standards; and provide consistent financial oversight, with similar institutions, products, risks, and services subject to consistent regulation, which would harmonize oversight within the United States and internationally. In addition, we reported that, given the difficulty of harmonizing insurance regulation across states through the NAIC-based structure, Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. NAIC and state regulators have taken steps to increase reciprocity in producer licensing across states, but challenges remain. Increased reciprocity and uniformity have been goals for Congress, NAIC, state insurance regulators, and insurers for a number of years, especially since passage of the Gramm-Leach-Bliley Act (GLBA) in 1999. Following passage of the act, NAIC and state regulators worked to develop the Producer Licensing Model Act (PLMA) and various NAIC and state licensing standards. According to NAIC, since GLBA, most states have passed and implemented the PLMA, which set licensing standards for states to follow in order to meet reciprocity and uniformity requirements. However, the small number of states performing full producer background checks with fingerprinting remains a barrier to greater reciprocity and uniformity. States that perform the checks may be unwilling to reciprocate with those that do not for fear of compromising their consumer protection laws. In addition, different licensing requirements and insurance line definitions across states have also limited reciprocity and uniformity. A lack of reciprocity and uniformity in producer licensing could lead to regulatory inefficiencies, higher insurance costs, and uneven consumer protections. GLBA's passage in 1999 and the subsequent development of the PLMA by NAIC and state regulators provided the framework and impetus for reciprocity and uniformity in producer licensing processes among states.
If, within 3 years of GLBA's enactment, at least 29 states did not pass either uniform or reciprocal laws and regulations governing the licensure of individuals and entities authorized to sell insurance, GLBA called for the preemption of certain state producer licensing laws and the potential formation of a federal regulatory body for insurers. Following passage of the act, NAIC and states elected to pursue the reciprocity option, with uniformity as a longer-term goal for producer licensing. To help states meet GLBA's reciprocity requirements, NAIC developed the PLMA to address differences among states in the definitions of insurance products and lines, agent licensing standards, and state licensing applications. The act was intended to streamline and standardize producer licensing requirements across states and improve the efficiency of insurance licensing processes. To respond to state differences in defining types or lines of insurance and general inefficiency in licensing processes, PLMA sought reciprocity and uniformity by specifying standard definitions for six major types or lines of insurance for use across states. The act also provided for reciprocal recognition across states of continuing education requirements for producers, another area in which state requirements had previously varied. In addition, in 2002 NAIC developed the Uniform Resident Licensing Standards (URLS) to help states implement the reciprocity requirements of GLBA and PLMA. The URLS address some items that, according to NAIC officials, were not included in PLMA, such as definitions for limited lines of insurance. Further, the URLS provide the professional standards for industry entry and continuation of licensure for insurers, as well as administrative standards for regulators to achieve uniformity and increased efficiencies. These professional standards are segmented into broad categories, such as integrity and personal background checks and surplus lines. The background checks, in particular, call for states to fingerprint their new producers and conduct state and federal background checks on applicants. NAIC also developed the Uniform Application for Individual Insurance Producer License, which created a standardized application form that state regulators could use for insurance licenses. Use of the form helps ensure that regulators have a single producer license application for use across states rather than multiple forms and documentation for individual states. In reports in 2002 and 2004 on insurance regulation, we noted that despite efforts made by NAIC and state regulators to implement GLBA and PLMA and create more uniform standards and processes, remaining differences among the states may limit full reciprocity and uniformity. For example, we previously found that some states are not willing to lower producer licensing standards—such as eliminating criminal background checks using fingerprint identification—to allow for uniform or reciprocal licensing, and few states have the ability to access nationwide criminal history data necessary for full background checks on applicants. In addition, we found that state insurance regulators could improve consumer protection by sharing regulatory and complaint information among financial services regulators. In 2003, NAIC developed its Modernization Plan to provide a roadmap for progress and centralize producer licensing and other regulatory goals, thus promoting uniformity and reciprocity among the states.
The plan essentially incorporated the goals of the PLMA, the URLS, and the Uniform Application. The plan also specifically sought to promote producer licensing uniformity and reciprocity by calling for background checks on insurance license applicants using electronic fingerprinting. Not only would such checks provide uniform background reviews across states, but they were intended to help state regulators ensure consistently high levels of consumer protection. To achieve greater use of criminal background checks with fingerprinting, the Modernization Plan also called for efforts by NAIC and state insurance regulators to recognize the important role of federal and state legislatures to pass legislation that would provide state insurance regulators the appropriate statutory authority necessary for conducting such checks. As of March 2009, NAIC has certified 47 states and jurisdictions as reciprocal based on their adoption and implementation of PLMA, which is integral to achieving the reciprocity and uniformity envisioned by the Modernization Plan. Passage of PLMA and certification by NAIC suggest that these 47 states and jurisdictions have similar producer licensing processes and standards in place and are reciprocal in their treatment of insurers who wish to sell insurance products across those states. Figure 1 provides a map of states and jurisdictions NAIC has certified as reciprocal. To be considered reciprocal for producer licensing in states beyond the home state where an initial license was granted, states must meet four conditions. First, states must permit producers with a license in their home state to sell insurance without satisfying any other additional requirements other than submitting: a request for licensure, the application used for licensure in the home state, proof of licensure and good standing in the home state, and any requisite fee. Second, states must accept the home state’s continuing education requirements. Third, states must not impose any other requirements for licensure that would limit insurance activities because of place of residence. And fourth, each state that meets these criteria must grant licensing reciprocity to insurers of all other states that also meet the criteria. NAIC certifies states as reciprocal based on the criteria above. Once certified as reciprocal, states have the continuing obligation to remain compliant, and NAIC has a continuing obligation to certify states’ compliance. According to NAIC officials, as of December 2008, all states were using the Uniform Application for non-resident applicants. Use of this form, according to NAIC and industry participants, has helped make licensing more uniform across states. Insurance industry officials said that the form has been of particular benefit to insurers with agents operating in multiple states, as they no longer are required to fill out different forms for each state. While passage of PLMA and certification of 47 states and jurisdictions as reciprocal represent progress in the area of producer licensing, several key states such as New York, California, and Florida have not been certified by NAIC as reciprocal because they generally have not accepted the producer licensing standards of other states—a condition of certification. An NAIC official noted that reasons for a lack of full reciprocity and uniformity among states include conflict among existing state laws and legislative and industry opposition to full fingerprint-based criminal background checks. 
In addition, the certification process does not include a review of whether states are also complying with the URLS, which added some standards that were not included in PLMA but which NAIC believed were important for meaningful uniformity and reciprocity. According to NAIC officials, NAIC plans to incorporate these additional standards in future certification efforts. Other limits on or barriers to complete reciprocity and uniformity include the limited number of states performing fingerprint background checks and differences in state licensing requirements and insurance line definitions. While criminal background checks are not required for reciprocal licensing arrangements between states, state insurance regulators have a responsibility to review insurance applications and prevent criminals from being licensed. Though some state insurance regulators have sought authority to conduct full background checks using fingerprinting, NAIC officials noted that, as of March 2009, only 17 states were performing full nationwide criminal history checks using fingerprinting as part of their licensing programs. With only 17 states performing such checks, reciprocity and uniformity of producer licensing across states may be limited when states that perform fingerprint checks do not accept licenses granted by states that do not perform such checks. Figure 2 is a map of states that conduct these background checks. States unable to perform these checks generally cite lack of statutory authority as a primary reason. Specifically, state insurance regulators and other industry participants noted that regulators have had difficulty getting state legislatures to grant the authority for insurance departments to access the law enforcement databases needed to review applicants for nationwide criminal records. Some states and industry officials reported industry group opposition as a primary reason for lack of movement by legislatures to grant full background check authority, and they noted that further progress by the states was unlikely without Congress granting such authority. NAIC also noted that Federal Bureau of Investigation administrative standards related to fingerprinting have been a barrier. Other states noted the importance of criminal background checks and mentioned that other entities such as the Securities and Exchange Commission have the authority under federal law to conduct full background checks with fingerprinting as part of their process to license those who wish to sell securities products. Officials from some of the states in our sample also noted that many states are unwilling to reciprocate with other states that do not conduct such checks and expressed concern that doing so would diminish their consumer protections. Without the ability to conduct full criminal background checks, state regulators are less likely to detect applicants with criminal backgrounds and deter them from obtaining licenses. State regulators' inability to thoroughly review applicants' regulatory backgrounds also hinders efforts to efficiently license applicants across multiple states. Some insurance regulators from our sample noted that they have the ability to access NAIC databases such as the Regulatory Information Retrieval System (RIRS) and the Special Activities Database (SAD) to check applicants' backgrounds, but they reported being unable to systematically perform regulatory history checks by querying the disciplinary records of separate systems used by banking and securities regulators.
Regulatory history checks by insurance regulators consist of efforts to determine whether insurance applicants have a history of consumer complaints or regulatory enforcement actions. Accessing this history may be difficult because banking, securities, and insurance regulators maintain regulatory background information in separate information technology systems. Some insurance regulators noted that no systematic query function or mechanism exists that would enable insurance and other regulators to check enforcement and complaint information in these separate systems in order to review information from across the three financial services sectors. Without the ability to share regulatory history data, insurance regulators may be less able to detect applicants with prior regulatory issues or histories of consumer complaints and prevent their entry into the insurance industry from other states or other parts of the financial services sector. Table 1 provides examples from the state of California of criminal convictions that were identified through fingerprint-based criminal background checks but that applicants for insurance licenses did not disclose on their insurance license applications. Despite progress to move reciprocity and uniformity forward through GLBA, PLMA, and the URLS, a lack of uniform producer licensing standards across states has limited reciprocity and uniformity. Some industry groups suggested that even NAIC-certified reciprocal states have additional or different requirements and processes for applicants. In some states, definitions of insurance lines and the number of insurance lines vary. For example, some states have followed the PLMA to define major lines with variable life and variable annuity as one line, requiring only one license to sell both types of products. Other states treat them separately as individual lines, and may require separate licenses for each. In addition, some states use limited lines definitions beyond the five categories defined in the URLS, which could require agents to obtain additional licenses. For example, some states have a line for luggage protection, which is not one of the URLS categories. The effect is that insurers experience the time, cost, and inefficiency that result from following different application processes and standards to meet individual state requirements. Some states also have additional requirements that may limit reciprocity and uniformity. For example, some states require business entity applicants to register with secretary of state offices before state insurance departments issue producer licenses, but others do not. In addition, some states have different license renewal periods for resident and non-resident insurers. Without more uniform licensing standards, some insurers suggest they will continue to experience greater time and administrative cost burdens that may get passed on to consumers. According to some industry participants in our sample, while reciprocity and uniformity of producer licensing across states have grown, the differences in background checks, licensing requirements, and insurance line definitions across states prevent them from achieving full reciprocity and uniformity, and may impact regulators and consumers. 
For example, differences in state regulators’ review of applicants’ backgrounds could lead to uneven consumer protection across states, and the resulting lack of reciprocity among states could lead to less regulatory efficiency, as some states do not recognize licenses obtained in other states. Insurers and industry associations have suggested that the different licensing requirements and insurance line definitions in some states could also create inefficiencies as agents operating in multiple states would have to meet different requirements in each state. These inefficiencies could result in higher costs for insurers, which in turn could be passed on to consumers. According to NAIC officials, NAIC and state regulators have taken steps to make product approval more efficient, but barriers to greater reciprocity and uniformity exist in the areas of product review and states’ approval processes. NAIC, working with state regulators, has set goals for increasing reciprocity and uniformity in product approval that address inefficiencies in product filing and review and encourage an open and competitive insurance market. NAIC and state regulators have standardized the initial filing process by creating an automated system that states and insurance producers may use for filing submissions, which is the first part of product approval. However, some states still have individual filing review and follow-up practices that work against uniformity. To improve the speed and efficiency of product approval, NAIC and some states have developed an Interstate Compact with standardized procedures for approval, but only for certain lines of insurance and only for participating states. Without greater reciprocity and uniformity of insurance product approval, regulatory inefficiencies may raise costs for insurance producers and result in less product choice and higher costs for consumers. NAIC has made progress in its efforts to make product approval processes more uniform and reciprocal by creating its 2003 Modernization Plan, which outlined written goals for improving the speed and efficiency of product approval across states. The plan gives insurance regulators a guide for improving reciprocity and uniformity of insurance oversight across states. Specifically, the plan calls for interstate collaboration and reforms to make filing more efficient so that states can improve the timeliness and quality of their reviews of insurance product filings. The plan also sought to integrate multi-state regulatory procedures with individual state regulatory requirements. For example, all states would use regulatory tools, such as uniform filing transmittal documents and a review standards checklist, which would allow insurance companies to verify state requirements before making filings. The plan also called for creating the Interstate Compact, which would develop uniform national product standards and provide a central point of filing. The Compact would use the standards to receive filings, review their contents, and facilitate approvals that would be honored by all participating states. NAIC, in conjunction with state regulators, developed SERFF to simplify, automate, and standardize the way insurers develop and submit insurance product rate and form filings. According to NAIC, SERFF provides a fast, simplified, electronic process for filing and provides filing checklists for those submitting filings. As of March 2009, SERFF was used by 52 states and jurisdictions. 
Florida does not use SERFF but does require electronic filing for product approval submissions. According to NAIC officials, the frequency of SERFF filing use has also grown, with filings increasing from around 7,000 in 2001 to around 550,000 at the end of 2008. Officials noted that as of March 2009, approximately 85 percent of all product filings to state insurance departments had occurred through the system. Since SERFF implemented a standardized, electronic process and checklists for required filing materials and processes, NAIC and some state regulators and industry associations reported that filing submission errors had decreased significantly and approvals of filings that did not require revisions had increased dramatically. While SERFF has resulted in product filing improvements, several limitations to greater reciprocity and uniformity remain. First, while SERFF provided a uniform way for insurers to submit product filings, states’ processes for reviewing and following up on filings may involve different procedures and approaches, and some states may require varying levels of additional documentation on products. Second, state regulators and their staffs may have varying levels of resources and expertise and their own informal ways of conducting reviews that create different approaches among states. Some states and one insurer in our sample reported that these differences might work against product approval reciprocity and uniformity. Specifically, the groups suggested that these individual state approaches, or “desk drawer” practices, may be inefficient because they represent fragmented ways of reviewing the same or very similar products. Some industry participants reported that the effect of desk drawer practices increases the time it takes insurers to get their products approved for sale, raising costs for consumers. However, such practices may also allow regulators to target efforts to protect consumers based on state-specific concerns and issues. According to NAIC, many state insurance regulators now participate in the Interstate Compact for multistate insurance product approval. According to NAIC and the Interstate Insurance Product Regulation Commission (IIPRC), NAIC and state regulators created the framework for the Compact in 2000 and developed a working group for the Compact and an Interstate Insurance Compact Model Law in 2002. The Compact was formally created when the first two states, Colorado and Utah, enacted legislation required at the state level to allow each state to join the Compact. NAIC noted that the IIPRC, an organization that manages the operations of the Compact, was established in May 2006, and in December 2006 the IIPRC adopted its first uniform product standards. It operates as a multistate public entity and serves as a single point for filing, review, and approval of life insurance, annuity, disability income, and long-term care insurance products. Once approved by the IIPRC, products may be sold in all member states. The Compact was developed to make processes for filing, reviewing, and approving certain insurance products more efficient and effective, and it aimed to promote uniformity through national product standards and processes. According to NAIC, as of March 2009, Compact membership consisted of 34 states and insurance jurisdictions (fig. 3). 
The IIPRC started receiving and reviewing product filings in 2007, and, as of March 2009, NAIC reported that the number of filings made through the Compact was relatively small but growing, with a mix of large and small states participating. While the Compact has provided more centralized and streamlined processes, several key regulatory states, such as New York, Florida, and California, have not joined. According to some industry participants, states that have not joined the Compact generally feel there might be a loss of consumer rights and remedies with Compact-approved products. In addition, some industry participants noted that states that have not joined the Compact may feel their current product approval and consumer protection processes are superior to the Compact's processes. Some industry participants, however, say that without fuller participation by states, full reciprocity and uniformity, even for the limited number of insurance lines covered by the Compact, will be difficult to achieve. Some states in our sample have formed their own alternative for product approval. Officials from the states of California and Texas noted that several states and jurisdictions, including California, Florida, Texas, and the District of Columbia, formed the Multi-State Review Program, which expedites product approval for annuity products using standards agreed upon by the program's participants. However, a system composed of compacting states, non-compacting states, and states forming their own approval arrangements has raised some concerns that multiple product approval systems may work against uniformity. Other issues with the Compact may also limit reciprocity and uniformity. According to some industry participants, some Compact processes allow states to make their own decisions regarding the nature of the products being approved. For example, under the Compact, states can decide whether they will allow fraud exceptions to life insurance policy incontestability clauses, that is, provisions in life insurance policies that generally limit insurers to a period of 2 years to determine whether policyholders misrepresented their health status and other information. These clauses generally protect insurers against misinformation that might be provided by policyholders, and they protect consumers against insurers that might deny policyholders coverage despite collecting years of premium payments. According to some industry participants, uniformity will be difficult to achieve if states have the ability to make individualized decisions that do not apply across multiple states. Further, two consumer groups expressed concern that the commission lacks transparency and accountability. First, these groups pointed out that product filings to the commission are not public, in contrast with some states that require publicly available insurance product filings. With the commission, product information does not become public until or unless products are approved. As a result, consumer groups suggested that they and others do not have an opportunity to review filings and help ensure that harmful products do not get approved for market. However, one state and one insurer offered a number of reasons for not having public filings, including (1) protection of insurers' proprietary information, (2) potential misuse of competitors' information in marketing or lawsuits, and (3) the possibility that consumers will not understand the information or might be misled by it.
For example, one insurer noted that product filings can be complicated and are written for regulatory review rather than to inform consumers. However, consumer groups suggested that without greater review of proposed products, it would be difficult for advocates to help protect consumers and hard for the general public to be informed buyers of insurance products. Second, consumer groups expressed concerns that the lack of public filings would make it much harder for consumer groups and the public to identify how suitable potential new products might be for consumers before they are approved. These groups noted that the commission’s standards provide uniformity for filing submission, review, and approval, but do not address issues of consumer suitability. And third, consumer groups questioned what recourse the commission offered consumers in the event that an insurance product harmed the public. The groups noted that it was unclear whether consumers could sue the commission the same way they might sue a state if an insurance product harmed them. According to some industry participants, while the Compact has increased reciprocity of approvals for some life, annuity, and long-term care insurance products, similar reciprocity is unlikely for property/casualty insurance products. According to some regulatory officials, the products covered by the Compact lend themselves to more uniform approval standards and processes because they are “mobile products,” such as life insurance policies, that can move with consumers and are less subject to local geographic characteristics such as weather, earthquakes, or urban versus rural environments. In addition, property/casualty products must conform to a number of relevant state laws, which often differ across states. These include laws regarding responsibility for, and limits to, damages, which differ across states. For example, some states allow joint and several liability in the recovery of damages, while others might not. Some states limit certain types of damages, such as pain and suffering, while others do not. According to one state, to the extent that property/casualty policies must be written to account for each state’s specific laws, reciprocity across states for approval of these policies will be limited. According to some industry participants, lack of reciprocity and uniformity in product approval processes could lead to inefficiencies that may have negative impacts across the insurance industry. First, when different states conduct product approval in multiple ways, such as through a voluntary compact that some states do not join or by striking individual agreements with other states, regulators may have difficulty achieving an efficient nationwide system of insurance oversight that produces uniform processes and consistently high standards. Uneven levels of protection across states may also mean that consumers may be better protected in some states than others. Second, according to some industry participants, when states have different approaches and practices for product approval, it may be difficult for insurers to achieve product approval in timely, cost-effective ways that enable them to bring new products to market that could serve consumers. Some insurers said that different state processes mean that insurers that file for approval in multiple states will have to produce multiple applications or tailor them to meet state requirements. 
And third, when regulators and insurers face such challenges, producing an insurance regulatory structure that consistently protects consumers across states becomes difficult to achieve. Lack of reciprocity and uniformity and the inefficiencies that may result from different product approval systems may impact both insurers and consumers in additional ways. Some insurers in our sample specifically suggested that these inefficiencies cost insurers and regulators time and financial resources and may inhibit the introduction of new products that could serve consumers as they seek protection against various risks. In addition, some industry participants noted that lack of uniformity and reciprocity in product approval processes may lead to higher costs for insurers and, in turn, consumers. NAIC and the states have taken steps to improve market conduct regulation, but variations in how states carry out market conduct oversight and in state laws and resources have limited progress toward reciprocity and uniformity. NAIC has established goals aimed at producing common market analysis and examination standards that states can use as the basis for a uniform market conduct program. In addition, NAIC has created guidance to help state regulators better manage and more uniformly approach market conduct oversight. While these efforts have encouraged more standardized practices for market conduct analysis and examination, states vary in how uniformly they use NAIC guidance and tools and in the resources and staff they have available for market conduct regulation. States’ varied use of the NAIC’s market conduct guidance, varying individual state insurance laws, and different levels of resources and staff expertise may also lead to market conduct inefficiencies and uneven consumer protection across states. NAIC has moved market conduct regulation forward by establishing goals and guidance in its 2003 Insurance Regulatory Modernization Action Plan (Modernization Plan), which aimed to improve uniformity of market conduct oversight by state regulators and covered other areas such as producer licensing and product approval. For market conduct oversight, the plan’s goals called for formal and rigorous market analysis across states. NAIC promotes analysis in order to help regulators identify market problems and companies and better protect consumers. According to NAIC officials, data collection and analysis were also intended to equip regulators with information that could help them better target their efforts and resources rather than relying on broader, more expensive exams to identify and respond to issues. The plan also calls for each state to adopt uniform market analysis standards and procedures and integrate market analysis into their overall regulatory functions. NAIC goals were developed, in part, in response to a 2003 GAO report, which recommended that NAIC and the states identify a common set of standards for a uniform market conduct program for use by all states, including procedures for market analysis and coordinating market conduct exams. We also recommended that NAIC and states establish a mechanism to encourage state legislatures to adopt and implement the minimum standards. While NAIC and state regulators have taken some steps to improve market conduct regulation, variations among state standards still exist. NAIC has taken several steps to improve market conduct regulation that include updating examination guidance and developing new data collection and analysis tools to promote uniformity. 
In addition, NAIC created a list of fundamental skills and resources state regulators should have for oversight of the insurance industry. NAIC has also sought to improve coordination of enforcement actions across states. However, use and implementation of these tools and guidance have varied across states.

According to NAIC officials, NAIC has initiated a number of efforts to improve market conduct regulation with tools and guidance for more standardized examination approaches. NAIC developed market conduct examination standards and procedures in its Market Regulation Handbook (Handbook), published in 2006. For example, NAIC officials told us that the Handbook updated NAIC's market regulation guidance by combining standards for market analysis and market conduct examinations into one document. Revisions to the Handbook and the tools and guidance it contains were designed to help states move from relying on broad examinations to identify market conduct issues to using market data and analysis to identify problems and target regulatory responses. In addition, NAIC officials told us that NAIC developed the Market Conduct Uniform Examination Outline (Outline) in 2002 to promote state uniformity in examination scheduling, pre-examination planning, core examination procedures, and examination reporting. The Outline sought to help minimize state variations in market conduct examinations. Among other things, the Outline includes a list of reasons for examinations, such as the extent of an insurer's market share, findings from other state regulators, a shift in business practices, a history of noncompliance, information collected through regulatory surveys, the length of time since the last examination, and new laws enacted since the last examination. According to NAIC officials, states can use the Outline at their discretion and self-certify with NAIC that they are using it, though NAIC does not verify states' reporting. Self-certification allows NAIC to gauge the extent of compliance with the Outline.

To promote a set of strong, uniform standards for market oversight, NAIC also developed guidance in the form of 99 core competency standards, which it considers to be fundamental capabilities and resources that state regulators should have in place for strong market conduct oversight. The body of core competency standards consists of four principal elements. First, departments of insurance should have the authority to analyze, examine, or investigate any entity involved with insurance transactions. Further, departments should have the staff training, resources, and types of examiners needed for market conduct oversight. Second, departments should have the ability to conduct market conduct data collection and analysis and should designate appropriate staff leaders responsible for an effective market analysis program. Third, departments should have a means of moving from market analysis to regulatory action by developing a spectrum of regulatory tools that are available for use in response to market conduct examinations, investigations, and consumer complaints. The fourth principal element of the core competency standards aims to promote interstate collaboration in regulatory action through participation in NAIC working groups and databases and through information sharing among regulatory staff designated as contacts on multistate enforcement actions.
According to NAIC officials, in addition to the Handbook and Examination Outline, NAIC has been working since 2005 to develop an accreditation program for market conduct regulation. NAIC developed the financial accreditation program in 1990 to help ensure uniformity of financial solvency regulation by the states. Its proposed market regulation accreditation program seeks to establish a market conduct accreditation process so that states can objectively monitor and oversee the conduct of insurers and protect consumers. Specifically, the program outlines six market conduct accreditation categories, based on the 99 core competency standards: (1) data collection and reporting, including use by states of key NAIC databases such as the Regulatory Information Retrieval System (RIRS), Complaints Database, Market Analysis Review System (MARS), Market Conduct Examination Tracking System, Special Activities Database (SAD), and Market Initiative Tracking System (MITS); (2) market analysis, which includes having appropriate regulatory staff with specific responsibility for data analysis and developing a baseline understanding of insurance markets and issues; (3) market conduct examinations, including having procedural guidelines and standards in place to determine when examinations should be called and adhering to the Scheduling, Coordinating, and Communicating chapter of the Market Regulation Handbook; (4) interstate collaboration, including contacts designated by commissioners of insurance for the purpose of interstate communication and collaborative actions; (5) oversight of contractors hired by insurance departments, to ensure that contractors have the expertise and professional qualifications to perform market conduct analysis and examinations; and (6) treatment of confidential information, meaning that insurance departments should have the authority to analyze, examine, or investigate entities involved with the business of insurance, as well as protect consumers, enforce a continuum of regulatory responses when needed, and keep records and insurance information confidential.

According to NAIC, while it encourages states to use its tools and guidance when developing market conduct oversight programs, use of the Market Regulation Handbook, including the Examination Outline, is not mandatory, and states have discretion regarding the extent to which the tools are implemented. Some of the state regulators in our sample noted that they used the Handbook to the extent its provisions were consistent with their state laws and market conduct priorities. For example, officials from one state department of insurance told us that they had instructed market conduct staff to use the Handbook as a foundation for the department's current revisions to its market conduct procedures, but only where the Handbook's guidelines did not conflict with the state's statutes or regulatory priorities. According to NAIC officials, as of March 2009, 41 states and the District of Columbia had self-certified compliance with the Examination Outline; NAIC does not validate states' certification and has no immediate plans to do so.

While NAIC has fully developed the core competency standards and drafted a market regulation accreditation program, use of the standards has varied, and the accreditation program, as of March 2009, was still a proposal that had not yet been implemented. For example, as of that date, NAIC officials told us that 29 states and jurisdictions had reported through an NAIC survey that they met the general core competency standards.
In addition, according to NAIC, the accreditation program's core competency standards require state insurance regulators to follow the Scheduling, Coordinating, and Communicating chapter of the Market Regulation Handbook with respect to planning market conduct examinations, but not other key market conduct guidance found in the Handbook. For example, the market conduct accreditation program's core competency standards do not require adherence to guidance on how to conduct property/casualty, life and annuity, health, and multistate examinations. Without requirements to follow other key parts of the Handbook as part of the market conduct accreditation program, it is unclear to what extent the program will help ensure strong market conduct practices and encourage uniform examination procedures across states.

NAIC has created market conduct data collection and analysis tools, but efforts to collect market conduct data from insurers face challenges. To improve data collection, NAIC developed the Market Conduct Annual Statement (MCAS), which began as a pilot project in 2002 and became permanent in 2004. MCAS is a data collection instrument designed to help state insurance regulators better understand insurers' conduct in the marketplace, identify problem areas, and use the information to target market conduct responses and examinations. The information collected includes, for example, annual data on how long it takes insurance companies to settle claims and the separate numbers of complaints insurers received from state departments of insurance and directly from consumers. Additional examples of MCAS data elements collected on different lines of insurance can be found in appendix III. According to NAIC, once the data are collected, state regulators use them to establish baseline measures for targeting their market conduct efforts and prioritizing companies for regulatory attention. State regulators may use deviation from these measures as a criterion for following up with an insurer about its conduct or for undertaking an examination.

According to NAIC officials, as of March 2009, 29 states were collecting data using MCAS. Other states used their own processes for tracking market conduct and identifying issues that required regulatory attention. For example, several state regulators reported using the information collected through MCAS to perform baseline analysis on insurance companies writing business in their states, identify insurer conduct that might require their attention or an examination, or monitor individual company and industry trends. One state regulator noted that since it had begun participating in MCAS, its market conduct staff no longer depended exclusively on premium volume or basic complaint activity to monitor an insurance company's market conduct. Further, according to the regulator, MCAS helps the department identify potential problems and their sources, thereby allowing department staff to target their responses rather than perform a comprehensive review. According to NAIC, while MCAS provides NAIC and insurance regulators with detailed market conduct data, greater uniformity and participation have been limited by disagreement among insurers and consumer groups over the types of data that MCAS collects and the extent to which the data should be made public.
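To illustrate the kind of baseline analysis described above, the following sketch computes simple industry baselines from MCAS-style data and flags insurers that deviate from them. The field names, sample values, and the 1.5x flagging threshold are hypothetical illustrations, not elements of MCAS or any NAIC tool.

```python
# Hypothetical illustration of MCAS-style baseline analysis.
# Field names and the 1.5x deviation threshold are assumptions for
# illustration only; they are not drawn from MCAS or NAIC guidance.

from statistics import median

mcas_records = [
    {"insurer": "Insurer A", "claims_closed_with_payment": 900,
     "claims_closed_without_payment": 100, "median_days_to_final_payment": 18},
    {"insurer": "Insurer B", "claims_closed_with_payment": 700,
     "claims_closed_without_payment": 300, "median_days_to_final_payment": 45},
    {"insurer": "Insurer C", "claims_closed_with_payment": 950,
     "claims_closed_without_payment": 50, "median_days_to_final_payment": 20},
]

def denial_ratio(rec):
    """Share of closed claims that were closed without payment."""
    closed = rec["claims_closed_with_payment"] + rec["claims_closed_without_payment"]
    return rec["claims_closed_without_payment"] / closed if closed else 0.0

# Establish baseline measures across all reporting insurers.
baseline_denial = median(denial_ratio(r) for r in mcas_records)
baseline_days = median(r["median_days_to_final_payment"] for r in mcas_records)

# Flag insurers that deviate substantially from the baseline; a regulator
# might follow up with these companies or consider a targeted examination.
THRESHOLD = 1.5
for rec in mcas_records:
    flags = []
    if denial_ratio(rec) > THRESHOLD * baseline_denial:
        flags.append("high share of claims closed without payment")
    if rec["median_days_to_final_payment"] > THRESHOLD * baseline_days:
        flags.append("slow claim settlement")
    if flags:
        print(f"{rec['insurer']}: {'; '.join(flags)}")
```

In this illustrative data, only Insurer B would be flagged, mirroring the way regulators described using deviation from baseline measures to prioritize companies for follow-up or examination.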
According to several insurers and industry officials, public access to MCAS data is problematic because they (1) consider MCAS data to be proprietary and fear their competitive position might be compromised if other insurers had access to it, (2) believe MCAS data could be misunderstood by the general public and used to make poor insurance decisions, and (3) feel MCAS data could be misused by trial attorneys to try to initiate class action suits against insurance companies. However, some consumer groups mentioned that MCAS data would better serve consumers if they contained more detailed insurer information than the summary-level data currently collected. In their view, more detailed data would help consumers better compare insurance companies and their products and would help regulators better protect consumers by using the data to identify and react to market conduct issues. These disagreements about data types, uses, and access have slowed consensus and cooperation on the use of uniform data to improve market conduct and have limited progress toward strong, uniform oversight. While the data access issues had not been resolved as of March 2009, NAIC officials noted that they will begin aggregating market conduct data in 2009 for eventual use by participating states during their oversight activities. NAIC also plans to continually refine its data collection, aggregation, and analysis processes and to work with states and the insurance industry on existing and future MCAS concerns. In addition, some industry participants in our sample noted that it was difficult to achieve greater market conduct uniformity when not all states participate in standardized improvement efforts like MCAS. NAIC has suggested that without greater state participation in this tool, some regulators will have to rely more on exams, which can be costly and duplicative across states, than on market analysis to monitor the marketplace and protect consumers.

In addition to MCAS, NAIC created Level 1 Analysis in 2005, an automated set of questions regulators can use to help evaluate individual companies. NAIC then built on Level 1 Analysis by developing Level 2 Analysis, which offers regulators additional sources of possible information on insurers' market conduct. Further, NAIC developed MARS in 2005, which stores Level 1 Analysis questions and insurers' answers. The MARS database can be accessed by states and helps regulators identify and respond to market conduct issues by seeing analysis performed by other states. NAIC sees these developments as standardized, uniform tools that state regulators can use to improve access to key regulatory information, identify insurance issues, and respond with targeted actions.

NAIC has also taken steps to improve coordination of enforcement actions across states, but the effect on uniformity is uncertain. NAIC formed the Market Analysis Working Group (MAWG) in 2003 to help states coordinate insurance regulatory actions. Specifically, the group functions to facilitate interstate communication on identified or potential market conduct issues, share information of common concern regarding insurers' activity, and promote a targeted regulatory response from a spectrum of possible actions. Some states in our sample said that through MAWG, several multistate collaborative actions had been initiated in both market conduct examinations and settlements.
In addition, one state insurance department noted that MAWG's quarterly meetings and the open lines of communication among states enabled it and other states to bring problem companies to the attention of the group for possible coordinated regulatory action. According to NAIC officials, because state regulators have a forum to discuss regulatory issues and actions, MAWG has also facilitated a more consistent range of regulatory responses to similar multistate concerns. In addition to MAWG, NAIC developed MITS in 2006, which enables states to track and share regulatory actions by entering these actions into an electronic database. For example, several states told us that they log their market conduct activities into MITS so that other states can learn about their issues and actions. Further, RIRS, a database that dates back to the 1980s and was automated in 1995, specifically allows states to see the adjudicated regulatory actions of other states. NAIC officials noted that the RIRS system helps them monitor the insurance market, hone in on issues they consider significant, and respond to those issues more efficiently.

Individual laws passed by state legislatures and implemented by state insurance departments govern market conduct regulation and consumer protection activities. While such differences allow for the regulatory flexibility needed in a diverse national marketplace for insurance, different laws, regulations, and practices may make greater uniformity among states difficult to achieve. To help increase the uniformity of state laws regarding market conduct activities, in 2004 NAIC and NCOIL jointly developed a market conduct model law establishing market conduct standards for use across states. However, according to NAIC, differences among states played a significant role in limiting support for the model law, and ultimately only one state adopted it.

Market analysis and examination uniformity are also limited by variations in states' resources. According to NAIC, states that have greater budgetary and staff resources may be able to undertake more detailed data collection and analysis and to apply NAIC's market conduct tools and guidance to a greater degree than states that have fewer resources. In addition to budgetary and resource differences, states may also vary in the levels of expertise their staff possess for conducting the data collection and analysis state regulators may use to identify and respond to market conduct issues. States with fewer resources and less expertise may be less able to analyze and use market conduct information as part of their regulatory oversight. However, some states in our sample noted that state regulators may contract with outside experts to fill functions or provide expertise they lack in-house, and although this may be costly, insurers generally bear these expenses.

Limited uniformity in the use of NAIC's market conduct tools and in state laws and resources—and the resulting limits on reciprocity among states—may create inefficiencies for insurers and regulators and lead to uneven levels of consumer protection across states. For example, in the absence of uniform examination procedures and criteria for selecting insurance companies to examine, states implement their respective market conduct processes based on state laws, insurance department priorities, and established practices.
Varying examination processes across states may mean that insurers are subjected to multiple and sometimes simultaneous exams by regulators in the states where they operate. An insurer's compliance with examinations by different state regulators may increase costs to the company, which in turn may be passed on to consumers in the form of higher insurance rates. When state regulators do not rely on other states' market conduct oversight, they may have to conduct more regulatory activities on their own. In addition, state regulators' varying use of NAIC's market conduct data collection instruments, examination tools, and guidance may lead to varying regulatory efforts in overseeing insurance companies in their respective states. According to some insurers and consumer groups, insurance companies located in states that have stronger market conduct surveillance standards may be subjected to more scrutiny than those in states with less stringent standards. Varying levels of market conduct oversight may thus leave consumers with stronger protections in some states than in others.

Reciprocity and uniformity in the insurance areas of producer licensing, product approval, and market conduct regulation can result in benefits to regulators, insurers, and consumers. For regulators, reciprocity and uniformity can mean standardized processes and standards that can lead to efficient and effective ways of working with insurers to license agents and brokers, review and approve products for sale in the marketplace, and protect consumers from harmful actors and products. For insurers, reciprocity means faster, more efficient ways of introducing and gaining approval for new insurance products and assurance that regulatory processes will be similar across states, potentially helping insurers keep their overall insurance product costs lower. An insurance system with greater reciprocity and uniformity may also limit inefficiencies that could contribute to higher product costs for insurers and consumers and may provide for more coordinated and even consumer protection across states. To the extent that reciprocity and uniformity are limited across states, benefits to regulators, insurers, and consumers may also be limited.

NAIC has made progress on reciprocity and uniformity in the key areas of producer licensing, product approval, and market conduct, but this progress has not come quickly and in some cases has been limited. Efforts to achieve greater reciprocity in producer licensing began following passage of GLBA in 1999, and as of March 2009, 47 states had been certified as reciprocal. However, several key states, including California, Florida, and New York, were still not considered reciprocal for non-resident producer license applicants, and it appears that many states still impose separate or additional requirements on resident producers. In addition, as of March 2009, only 17 states were conducting criminal background checks on applicants, resulting in uneven consumer protections across states. Finally, we recommended in 2000 that NAIC and state insurance regulators develop mechanisms for routinely obtaining regulatory data from financial services regulators. Limited progress has been made in this area, and we continue to believe that the development of such a system is an important element of effective consumer protection efforts.
NAIC furthered efforts to improve reciprocity and uniformity in the approval of insurance products when it implemented SERFF in 1998, a system that has automated the process of filing insurance products for approval. As of March 2009, SERFF was used in 52 states and jurisdictions, and approximately 85 percent of all filings were made through the system. NAIC and the states also advanced product approval reciprocity and uniformity with the creation of NAIC's Modernization Plan in 2003. In addition, NAIC and the states created the Interstate Insurance Product Regulation Commission, a single product approval entity whose approvals are recognized among the compacting states. Nonetheless, it appears that many states are still imposing their own approval practices and requirements on insurers, which limits both reciprocity and uniformity. In addition, the Compact is limited to 34 states and jurisdictions and only certain types of insurance products.

NAIC and the states have also made efforts to improve market conduct regulation, efforts that were noted in NAIC's 2003 Modernization Plan and addressed in a 2003 GAO report. Our report recommended that NAIC and the states take steps to adopt and implement minimum standards for market conduct oversight that would apply to all states. We still believe improvements are needed to address remaining market conduct regulatory differences among states. Such actions could include ensuring that all appropriate guidance—for example, from the Market Regulation Handbook—is included as part of the accreditation process and ensuring that states meet uniform minimum standards in a timely manner. NAIC and state insurance regulators have completed some improvements, such as revising market conduct guidance and creating a market conduct working group that has helped increase uniformity across states. However, other important efforts, such as the collection and use of standardized market conduct data and implementation of the core market conduct competency standards, were still incomplete as of March 2009. As a result, uniformity across states may be limited, and consumer protections may vary.

Regulators have faced, and will continue to face, a number of challenges to increasing reciprocity and uniformity in these areas. For example, insurance regulatory improvement may require increasing the uniformity of state laws that govern licensing requirements and product approval, which in turn requires cooperation from state legislatures. NAIC and state insurance regulators' work with state legislatures has occurred over a number of years, and some regulators told us that cooperation had been difficult to achieve in some areas. In particular, according to NAIC, despite efforts in many more states, regulators in only 17 states have obtained statutory authority to conduct full criminal background checks with fingerprinting. Another challenge is the differing levels of resources and expertise among state insurance departments, which means that some states may have the resources and staff for certain efforts while others may not. Further, NAIC's operations generally require consensus among a large number of regulators, and NAIC seeks to obtain and consider the input of industry participants and consumer advocates.
Obtaining a wide range of views may create a more thoughtful, balanced regulatory approach, but working through the different goals and priorities of all of these entities can result in lengthy processes and long implementation periods for regulatory improvements. Continued progress in a timely manner, however, is critical to improving the efficiency and effectiveness of the insurance regulatory system. We also recognize that the costs and benefits of further increases in reciprocity and uniformity must be considered. Regulators, insurers, and consumers may not benefit if uniformity were achieved simply by lowering standards across states. At the same time, it may not be feasible to achieve reciprocity and uniformity across states by meeting the highest standard achieved by any one state. In addition, it is not clear that full reciprocity in some areas would be realistically achievable. For example, as we have said, uniformity and reciprocity for the approval of property/casualty products would require significant changes in state laws, including a wide body of tort law. States have tailored those laws to best protect their residents, and since many are not exclusive to insurance, such large-scale changes may be unlikely.

As the insurance regulatory system is part of the broader financial regulatory system, it should support the goals that the federal government has for the entire financial regulatory system and should be part of discussions of potential regulatory reforms. In a recent report, we suggested a number of goals for the U.S. financial regulatory system. Reciprocity and uniformity within the regulation of insurance could support at least four of these goals. First, a regulatory system in which changes can be made uniformly across states may be able to adapt more readily to innovations and changes in the insurance market. Second, greater reciprocity and uniformity could lead to a more efficient system for regulators, through the reduction of overlapping activities, as well as for insurers, by reducing the number of different requirements they must meet across states. Third, greater uniformity across states could provide more consistent protection for consumers purchasing similar products and services. Fourth, greater uniformity could also provide more consistent financial oversight for similar institutions, products, and services.

In that report we also noted that, given the difficulties of harmonizing insurance regulation across states, Congress could explore the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. The establishment of a federal insurance charter could help alleviate some of these challenges, but such an approach could also have unintended consequences for state regulatory bodies and for insurance firms. However, any consideration of a change to the current insurance regulatory structure, including a possible federal insurance charter, should involve appropriate cost-benefit analysis.

In order to improve how state insurance regulators identify insurance license applicants with criminal backgrounds and protect consumers, Congress, as it considers the advantages and disadvantages of a change to the federal role in the regulation of insurance, should explore ways to ensure that all state insurance regulators can conduct nationwide criminal background checks as part of their producer licensing and consumer protection functions.
To continue progress achieved through NAIC's electronic and automated product filing processes, we also recommend that NAIC and state regulators work with the insurance industry to further identify differences in the ways state regulators review and approve filings received through SERFF, and take any necessary steps, where appropriate, to improve consistency in their product approval processes.

We provided a draft of this report to NAIC. NAIC's Chief Operating Officer and Chief Legal Officer provided written comments, which are reprinted in appendix II, and agreed with our recommendation. NAIC also made some general comments about the benefits of state-based regulation. In the area of producer licensing, NAIC noted that while we acknowledged that 47 states had been certified as reciprocal, we also described reciprocity as limited. As we discuss in the report, while NAIC has made progress in some areas, we continue to view overall progress on uniformity and reciprocity as limited. NAIC and the states have made progress with reciprocity, but the certification process does not include a review of whether states are also complying with the URLS, which, as noted in the report, added some standards that were not included in PLMA but that NAIC believed were important for meaningful uniformity and reciprocity. For example, the certification process does not require criminal background checks. Also related to this issue, NAIC noted that one procedural issue, FBI administrative standards related to fingerprinting, has been a significant impediment. We have added this new information to the report. NAIC also noted a number of other efforts it has taken in the area of producer licensing, including the State Producer Licensing Database.

In the product approval area, NAIC commented on a variety of issues and provided some updated data on the Interstate Compact and the activities of the Interstate Insurance Product Regulation Commission (IIPRC), both of which are discussed in the report. Moreover, NAIC noted that it has continued to make progress in adopting uniform standards for certain product lines and that more companies are registering. The letter also provides NAIC's views on the flexibility and improvements afforded states regarding product approval, an issue raised during the course of our work and discussed in the report. NAIC also discusses the Compact approval process and transparency. As we noted in the report, consumer groups we spoke with expressed concern that the product approval process was not more transparent. NAIC commented on suitability and consumer protection issues associated with the Compact by noting that state insurance regulators retain the authority to protect consumers and that the Compact preserves consumers' rights to pursue legal remedies not specifically directed to the content of the product.

Finally, with respect to market conduct regulation, NAIC highlighted its efforts in this area and noted that it continues to pursue standardized data collection practices, the development of a Market Regulation Accreditation Program, and participation by all states in MCAS data collection by 2010. In addition, NAIC provided technical comments on the report, which we incorporated as appropriate.

As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date of issue.
At that time we will send copies of this report to interested congressional committees, the Chief Executive Officer of the National Association of Insurance Commissioners, and others. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. As Congress has considered options to help ensure efficient and effective regulation of the insurance market, policymakers have had a number of questions about the success of recent efforts and the challenges that remain. To address these questions, in each of the regulatory areas of producer licensing, product approval, and market conduct regulation, we have been asked to assess (1) the progress NAIC and state regulators have made to increase reciprocity and uniformity, (2) the factors that have challenged efforts to achieve greater reciprocity and uniformity, and (3) the potential effects on the insurance industry and consumers if greater progress is not made. To assess the progress, challenges, and potential effects on the insurance industry and consumers related to reciprocity and uniformity in producer licensing, product approval, and market conduct, we interviewed officials from state insurance departments, the National Association of Insurance Commissioners (NAIC), the National Conference of Insurance Legislators (NCOIL), primary insurance companies, insurance associations, and consumer advocacy groups. We met with insurance regulators from nine states—Alabama, California, Florida, Georgia, Illinois, New York, Texas, Pennsylvania, and Ohio. We selected this sample of states due to the states’ geographic diversity and respective premium volumes, which ranged from small to large. The four insurers we met with provided property and casualty insurance coverage and life and health insurance to consumers. We also met with several industry associations representing insurance companies covering property and casualty and life and health insurance lines across states. The consumer advocacy groups with whom we met represented both individual state consumers and consumers nationwide. We also reviewed congressional testimony from knowledgeable industry participants, several of whom we interviewed for this study. Further, we examined regulatory documents such as NAIC’s Insurance Regulatory Modernization Action Plan (Modernization Plan) and NAIC’s standards and guidelines concerning producer licensing, product approval, and market conduct regulation. Finally, we reviewed our previous reports and testimonies and Congressional Research Service reviews. To examine the progress, challenges, and potential effects on the insurance industry and consumers related to producer licensing reciprocity and uniformity, we spoke with NAIC officials, NCOIL officials, state insurance regulators, insurance companies, insurance associations, and consumer advocacy groups. To obtain information on the producer licensing goals that NAIC established, we reviewed NAIC’s 2003 Modernization Plan and other NAIC documents. We also reviewed our previous reports and testimonies that called for improvements to producer licensing. 
To document the states that NAIC has certified as reciprocal for producer licensing, and those states that have statutory authority to perform criminal background checks with fingerprinting, we relied on NAIC data. To examine progress in making product approval more efficient, the barriers to further reciprocity and uniformity and the potential effects if more progress is not made, we spoke with NAIC, NCOIL, states, industry representatives, and consumer advocacy groups. We reviewed NAIC’s Modernization Plan and other NAIC documentation to determine NAIC’s product approval goals. Previous GAO studies provided recommendations geared toward improvement of product approval regulation. To gather information on the states that have joined the Interstate Compact, we relied on NAIC and Interstate Insurance Product Regulation Commission (IIPRC) data. To examine the steps NAIC has taken to improve market conduct reciprocity and uniformity, and the potential impact on the insurance industry if greater progress does not occur, we spoke with NAIC, state insurance regulators, insurance companies, insurance industry associations, and consumer advocates. Documentation from NAIC such as the Modernization Plan and the Market Regulation Handbook provided us with NAIC’s market conduct goals and guidance to promote uniform market conduct standards. To obtain information on the specific data elements collected through the MCAS, we relied on NAIC documentation on elements collected for individual and group life, fixed and variable annuities, private passenger auto, and homeowners insurance products. We conducted our work from February 2008 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix III: Examples of Market Conduct Annual Statement Data Elements

Private Passenger Auto Insurance Data Elements
Number Of Claims Open At The Beginning Of The Period
Number Of Claims Opened During The Period
Number Of Claims Closed During The Period, With Payment
Number Of Claims Closed During The Period, Without Payment
Median Days To Final Payment
Number Of Claims Settled Within 0-30 Days
Number Of Claims Settled Within 31-60 Days
Number Of Claims Settled Within 61-90 Days
Number Of Claims Settled Within 91-180 Days
Number Of Claims Settled Within 181-365 Days
Number Of Claims Settled Beyond 365 Days
Median Days To Date Of Report
Number Of Suits Open At Beginning Of The Period
Number Of Autos Which Have Policies In-Force At The End Of The Period
Number Of Policies In-Force At The End Of The Period
Number Of New Business Policies Written During The Period
Dollar Amount Of Direct Premium Written During The Period
Number Of Non-Renewals During The Period
Number Of Cancellations That Occur 60 Days Or More After Effective Date, Excluding Those For Either Non-Pay Or At The Insured's Request
Number Of Cancellations That Occur In The First 59 Days After Effective Date, Excluding Those For Either Non-Pay Or At The Insured's Request

Homeowners Insurance Data Elements
Number Of Claims Open At The Beginning Of The Period
Number Of Claims Opened During The Period
Number Of Claims Closed During The Period, With Payment
Number Of Claims Closed During The Period, Without Payment
Median Days To Final Payment
Number Of Claims Settled Within 0-30 Days
Number Of Claims Settled Within 31-60 Days
Number Of Claims Settled Within 61-90 Days
Number Of Claims Settled Within 91-180 Days
Number Of Claims Settled Within 181-365 Days
Number Of Claims Settled Beyond 365 Days
Median Days To Date Of Report
Number Of Suits Open At Beginning Of The Period
Number Of Suits Closed During The Period
Number Of Suits Open At End Of Period
Number Of Dwellings Which Have Policies In-Force At The End Of The Period
Number Of Policies In-Force At The End Of The Period
Number Of New Business Policies Written During The Period
Dollar Amount Of Direct Premium Written During The Period
Number Of Non-Renewals During The Period
Number Of Cancellations That Occur 60 Days Or More After Effective Date, Excluding Those For Either Non-Pay Or At The Insured's Request
Number Of Cancellations That Occur In The First 59 Days After Effective Date, Excluding Those For Either Non-Pay Or At The Insured's Request

Fixed and Variable Annuities Data Elements
Number Of New Replacement Contracts Applied For During The Period
Number Of New Replacement Contracts Issued During The Period
Internal Replacement Indicator (Yes/No)
Loan Purchase Indicator (Yes/No)
1035 Rollover Indicator (Yes/No)
Replacement Register Indicator (Yes/No)
Number Of Contracts Surrendered During The Period
Number Of New 1035 Exchanges Coming Into The Company During The Period
Number Of New Contracts Issued During The Period
Number Of Contracts In Force At The End Of The Period
Dollar Amount Of Annuity Considerations During The Period
Number Of Complaints Received Directly From Consumers
Number Of Complaints Received Directly From The Corresponding Department Of Insurance
Complaint Register Indicator (Yes/No)

Individual and Group Life Product Data Elements
Number Of New Replacement Policies Applied For During The Period
Number Of New Replacement Policies Issued During The Period
Internal Replacement Indicator (Yes/No)
Surrender Indicator (Yes/No)
Loan Purchase Indicator (Yes/No)
1035 Rollover Indicator (Yes/No)
Replacement Register Indicator (Yes/No)
Number Of In Force Policies Containing Policy Loans With An Outstanding Balance Over 25 Percent Of The Maximum Loan Value As Of December 31, 20XX
Partial Surrenders Indicator (Yes/No)
Number Of New 1035 Exchanges Coming Into The Company During The Period
Number Of New Policies Issued During The Period
Number Of Policies In Force At The End Of The Period
Dollar Amount Of Direct Premium During The Period
Dollar Amount Of Insurance Issued During The Period (Face Amount)
Dollar Amount Of Insurance In Force At The End Of The Period (Face Amount)
Number Of Complaints Received Directly From Consumers
Number Of Complaints Received Directly From The Corresponding Department Of Insurance
Complaint Register Indicator (Yes/No)

In addition to the contact named above, Patrick Ward (Assistant Director), Farah Angersola, Emily Chalmers, Barry Kirby, Marc Molino, Steve Ruszczyk, and Jennifer Schwartz made key contributions to this report.

Related GAO Products

Better Information Sharing Among Financial Services Regulators Could Improve Protections for Consumers. GAO-04-882R. Washington, D.C.: June 29, 2004.
Insurance Regulation: Common Standards and Improved Coordination Needed to Strengthen Market Regulation. GAO-03-433. Washington, D.C.: September 30, 2003.
Preliminary Views on States' Oversight of Insurers' Market Behavior. GAO-03-738T. Washington, D.C.: May 6, 2003.
State Insurance Regulation: Efforts to Streamline Key Licensing and Approval Processes Face Challenges. GAO-02-842T. Washington, D.C.: June 18, 2002.
Regulatory Initiatives of the National Association of Insurance Commissioners. GAO-01-885R. Washington, D.C.: July 6, 2001.
Financial Services Regulators: Better Information Sharing Could Reduce Fraud. GAO-01-478T. Washington, D.C.: March 6, 2001.
Insurance Regulation: Scandal Highlights Need for Strengthened Regulatory Oversight. GGD-00-198. Washington, D.C.: September 19, 2000.
Insurance Regulation: Scandal Highlights Need for Strengthened Regulatory Oversight. T-GGD-00-209. Washington, D.C.: September 19, 2000.
Because the insurance market is a vital part of the U.S. economy, Congress and others are concerned about limitations to reciprocity and uniformity, regulatory inefficiency, higher insurance costs, and uneven consumer protection. GAO was asked to review the areas of (1) producer licensing, (2) product approval, and (3) market conduct regulation in terms of progress by NAIC and state regulators to increase reciprocity and uniformity, the factors affecting this progress, and the potential impacts if greater progress is not made. GAO analyzed federal laws and regulatory documents, assessed NAIC efforts, and interviewed industry officials. Reciprocity of producer licensing among states has improved, but consumer protection and other issues present challenges to uniformity and full reciprocity. Congress' passage of the Gramm-Leach-Bliley Act (GLBA) in 1999, NAIC's Producer Licensing Model Act (PLMA) of 2000, and uniform licensing standards (2002) have helped improve reciprocity and uniformity. However, NAIC officials noted that as of March 2009, only 17 states were performing full criminal history checks using fingerprinting, and some states that do such checks have been unwilling to reciprocate with states that do not. In addition, some insurance regulators in our sample noted that regulators do not have a systematic way to access disciplinary records of other financial regulators. Without full checks on applicants, states may less effectively protect consumers. Licensing standards, including how state regulators define lines of insurance, also vary across states, further hindering efforts to create reciprocity in agent licensing. These differences may result in inefficiencies that raise costs for insurers and consumers. State regulators' processes to approve insurance products have become more efficient, but barriers exist to greater reciprocity and uniformity. NAIC and state regulators have improved product approval filings by creating the System for Electronic Rate and Form Filing (SERFF) in 1998, which, according to some industry participants, has simplified filings and reduced filing errors. However, SERFF does not address differences in regulators' review and approval processes. In addition, an Interstate Compact was created in 2006 to facilitate approval of certain life, annuity, disability income, and long-term care products, which are accepted across participating states. As of March 2009, 34 states participated in the Compact. However, the Compact leaves some decisions on approval up to the individual states, and several key states have not joined because they feel their processes and protections are superior to the Compact's. Moreover, differences in state laws are likely to limit reciprocity in the approval of property/casualty insurance products. To the extent these areas lack reciprocity and uniformity, some industry participants noted that there may be inefficiencies that slow the introduction of new products and raise costs for insurers and consumers. NAIC and the states have taken steps to improve reciprocity and uniformity of market conduct regulation, but variation across states has limited progress. For example, NAIC noted that in 2006 it developed uniform guidance, and in 2008 created core competency standards, which are intended to be part of an accreditation process for market conduct regulation. NAIC noted that the accreditation plan has not been finalized, and the standards do not include adherence to all NAIC market conduct guidance. 
In addition, NAIC in 2002 developed the Market Conduct Annual Statement (MCAS) to promote uniform data collection and better target exams. However, industry participants have several concerns about MCAS, and NAIC noted that fewer than half of insurance regulators use it for data collection. NAIC has also created a working group to coordinate enforcement actions. While better communication and coordination appear to have resulted, according to some states in our sample, the effect on uniformity of market conduct regulation is uncertain. Lack of uniformity and reciprocity may lead to inefficiencies, higher insurance costs, and uneven consumer protection across states.
TRICARE beneficiaries used the program's pharmacy benefit to fill almost 134 million outpatient prescriptions in fiscal year 2012. Through its acquisition process, DOD contracts with a pharmacy benefit manager—currently Express Scripts—to provide access to a retail pharmacy network, operate a mail-order pharmacy for beneficiaries, and provide administrative services. Under TRICARE, beneficiaries have three primary health plan options in which they may participate: (1) a managed care option called TRICARE Prime, (2) a preferred-provider option called TRICARE Extra, and (3) a fee-for-service option called TRICARE Standard. TRICARE beneficiaries may obtain medical care through a direct-care system of military treatment facilities or a purchased-care system consisting of network and non-network private sector primary and specialty care providers and hospitals. In addition, TRICARE's pharmacy benefit—offered under all TRICARE health plan options—provides beneficiaries with three options for obtaining prescription drugs: from military treatment facility pharmacies, from network and non-network retail pharmacies, and through the TRICARE mail-order pharmacy.

TRICARE's pharmacy benefit has a three-tier copayment structure based on whether a drug is included in DOD's formulary and the type of pharmacy where the prescription is filled. (See table 1.) DOD's formulary includes a list of drugs that all military treatment facilities must provide and a list of drugs that military treatment facilities may elect to provide on the basis of the types of services offered at that facility (e.g., cancer drugs at facilities that provide cancer treatment). DOD designates other drugs as "non-formulary" on the basis of its evaluation of their cost and clinical effectiveness. Non-formulary drugs are available to beneficiaries at a higher cost, unless the provider can establish medical necessity.

Under the National Defense Authorization Act (NDAA) for fiscal year 2008 (Pub. L. No. 110-181, § 703, 122 Stat. 3, 188, codified at 10 U.S.C. § 1074g(f)), with respect to any prescriptions filled on or after January 28, 2008, the TRICARE retail pharmacy program is to be treated as an element of DOD for purposes of procurement of drugs by federal agencies under 38 U.S.C. § 8126, to ensure that drugs paid for by DOD that are dispensed to TRICARE beneficiaries at retail pharmacies are subject to federal pricing arrangements. As a result, manufacturers are required to refund to DOD the difference between the federal pricing arrangements and the retail price paid for prescriptions filled dating back to the NDAA's enactment on January 28, 2008. As of July 31, 2013, according to DOD, its total estimated savings from fiscal year 2009 through fiscal year 2013 were about $6.53 billion as a result of these refunds.

DOD's TRICARE Management Activity is responsible for overseeing the TRICARE program, including the pharmacy benefit. Within this office, the Pharmaceutical Operations Directorate (hereafter referred to as the program office) is responsible for managing the pharmacy benefit (including the contract to provide pharmacy services), and the Acquisition Management and Support Directorate (hereafter referred to as the contracting office) is responsible for managing all acquisitions for the TRICARE Management Activity. The two offices together manage the acquisition process for the pharmacy services contract. (See fig. 2.)
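The retail refund mechanism described above can be expressed as simple arithmetic: for each prescription, the manufacturer refunds the amount by which the retail price paid exceeds the applicable federal price. The sketch below is a hypothetical illustration only; the field names and prices are assumptions, and actual refund calculations under the federal pricing arrangements involve pricing rules not shown here.

```python
# Hypothetical sketch of the retail refund concept described above.
# Field names and prices are illustrative assumptions; actual federal
# pricing calculations under 38 U.S.C. § 8126 are more involved.

prescriptions = [
    {"drug": "Drug X", "quantity": 30, "retail_unit_price": 4.00,
     "federal_unit_price": 2.50},
    {"drug": "Drug Y", "quantity": 90, "retail_unit_price": 1.20,
     "federal_unit_price": 1.20},
]

def refund_due(rx):
    """Refund owed to DOD: the amount by which the retail price paid
    exceeds the federal pricing arrangement for the prescription."""
    per_unit_difference = max(0.0, rx["retail_unit_price"] - rx["federal_unit_price"])
    return per_unit_difference * rx["quantity"]

total_refund = sum(refund_due(rx) for rx in prescriptions)
print(f"Estimated total refund due to DOD: ${total_refund:.2f}")  # $45.00
```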
The program office and the contracting office provide the clinical expertise and acquisition knowledge, respectively, for the acquisition planning, evaluation of proposals, and award of the pharmacy services contract. The acquisition process for DOD's pharmacy services contract includes three main phases: (1) acquisition planning, (2) RFP, and (3) award.

Acquisition planning. In the acquisition planning phase, the program office, led by the program manager, is primarily responsible for defining TRICARE's contract requirements—the work to be performed by the contractor—and developing a plan to meet those requirements. The program office also receives guidance and assistance from the contracting office in the development and preparation of key acquisition documents and in the market research process. The market research process can involve the development and use of several information-gathering tools, including requests for information (RFI), which are publicly released documents that allow the government to obtain feedback from industry on various acquisition elements such as the terms and conditions of the contract. RFIs are also a means by which the government can identify potential offerors and determine whether the industry can meet its needs. In addition, we have previously reported that sound acquisition planning includes an assessment of lessons learned to identify improvements. Toward the end of this phase, officials in the program and contracting offices work together to revise and refine key acquisition planning documents.

RFP. In the RFP phase, the contracting officer—the official in the contracting office who has the authority to enter into, administer, modify, and terminate contracts—issues the RFP and receives the proposals from prospective offerors. RFPs include a description of the contract requirements, the anticipated terms and conditions that will be contained in the contract, the required information that prospective offerors must include in their proposals, and the factors that will be used to evaluate proposals.

Award. In the award phase, the program and contracting offices are responsible for evaluating proposals and awarding a contract to the offeror representing the best value to the government based on a combination of technical and cost factors. To monitor the contractor's performance under the contract after award, the contracting officer officially designates a program office official as the contracting officer's representative (COR), who acts as the liaison between the contracting officer and the contractor and is responsible for the day-to-day monitoring of contractor activities to ensure that services are delivered in accordance with the contract's performance standards. The draft monitoring plan for the upcoming pharmacy services contract includes 30 standards—related to timeliness of claims processing, retail network access, and beneficiary satisfaction, among other things—against which the contractor's performance will be measured.

DOD has department-wide acquisition training and experience requirements for all officials who award and administer DOD contracts, including the pharmacy services contract, as required by law. Training is primarily provided through the Defense Acquisition University and is designed to provide a foundation of acquisition knowledge, but it is not targeted to specific contracts or contract types.
In addition, all CORs must meet training and experience requirements specified in DOD's Standard for Certification of Contracting Officer's Representatives (COR) for Service Acquisitions, issued in March 2010. See appendix I for more information on the certification standards for and experience of officials who award and administer the pharmacy services contract.

In September 2010, DOD issued guidance to help improve defense acquisition through its Better Buying Power Initiative. DOD's Better Buying Power Initiative encompasses a set of acquisition principles designed to achieve greater efficiencies through affordability, cost control, elimination of unproductive processes and bureaucracy, and promotion of competition; it provides guidance to acquisition officials on how to implement these principles. The principles are also designed to provide incentives to DOD contractors for productivity and innovation in industry and government.

DOD used market research to align the requirements for the upcoming pharmacy services contract with industry best practices and promote competition. DOD also identified changes to the requirements for the upcoming and current contracts in response to changes in legislation, efforts to improve service delivery, and contractor performance. DOD solicited information from industry during its acquisition planning for the upcoming pharmacy services contract through the required market research process, including issuing RFIs and a draft RFP for industry comment, to identify changes to the contract requirements. Specifically, DOD used market research to align the requirements for the upcoming contract with industry best practices and promote competition.

Align contract requirements with industry best practices. DOD issued five RFIs from 2010 through 2012 related to the upcoming contract. RFIs are one of several market research methods available to federal agencies. Although DOD is not required to use them, RFIs are considered a best practice for service acquisitions in the federal government. The RFIs provided DOD with the opportunity to assess the capability of potential offerors to provide services that DOD may incorporate in the upcoming pharmacy services contract. In many of the RFIs, DOD asked questions about specific market trends so that it could determine if changes were needed to the upcoming contract requirements to help align them with industry best practices. For example, DOD issued one RFI in November 2010 that asked about establishing a mechanism that would allow for centralized distribution of specialty pharmaceuticals and preserve DOD's federal pricing arrangements. Specialty pharmaceuticals—high-cost injectable, infused, oral, or inhaled drugs that are generally more complex to distribute, administer, and monitor than traditional drugs—are a growing cost driver for pharmacy services. According to DOD officials, the RFI responses received from industry generally reinforced their view that the RFP should define any specialty pharmacy owned or subcontracted by the contractor as a DOD specialty mail-order outlet, which would subject it to the same federal pricing arrangements as the mail-order pharmacy.

Promote competition. DOD has also used the RFI process to obtain information on promoting competition.
DOD recognized that a limited number of potential offerors may have the capability to handle the pharmacy services contract, given the recent consolidation in the pharmacy benefit management market and the large size of the TRICARE beneficiary population. DOD contracting officials told us that, in part because of the department's Better Buying Power Initiative to improve acquisition practices, they have a strong focus on maintaining a competitive contracting environment for the pharmacy services contract and have therefore increased the use of market research early in the acquisition planning process. For example, DOD's December 2011 RFI asked for industry perspectives on the length of the contract period. DOD was interested in learning whether a longer contract period would promote competition. DOD officials told us that the responses they received confirmed that potential offerors would prefer a longer contract period because it would allow a non-incumbent more time to recover any capital investment made as part of implementing the contract. The RFP for the upcoming contract includes a contract period of 1 base year and 7 option years. DOD also used the RFI process to confirm that there were a sufficient number of potential offerors to ensure full and open competition for the pharmacy services contract. DOD officials told us that they found there were at least six potential offerors, which gave them confidence that there would be adequate competition.

Since the start of the current pharmacy services contract in 2009, DOD has identified changes to the contract requirements in response to legislative changes to the pharmacy benefit, efforts to improve service delivery to beneficiaries, and improvements identified through monitoring of the current contractor's performance. In each instance, DOD officials needed to determine whether to make the change for the upcoming contract or to make the change via a modification to the current contract. According to DOD officials, there were over 300 modifications to the current pharmacy services contract; 23 of these were changes to the work to be done by the contractor. DOD officials told us that it is not possible to build a level of flexibility into the contract to accommodate or anticipate all potential changes (and thus avoid modifications to the contract), because doing so would make it difficult for offerors to determine pricing in their proposals.

Legislative changes to the pharmacy benefit. Legislative changes have been one key driver of DOD's revisions to its pharmacy services contract requirements. For example, one legislative change required DOD to implement the TRICARE Young Adult program, which resulted in DOD adding a requirement for the contractor to extend pharmacy services to eligible military dependents through the age of 26. This change was made as a modification under the current contract. Another legislative change that necessitated changes to the contract requirements was the increase in beneficiary copayments for drugs obtained through mail-order or retail pharmacies, enacted as part of the NDAA for fiscal year 2013, which DOD implemented through a modification to the current contract. A third legislative change to the pharmacy benefit was the mail-order pilot for maintenance drugs for TRICARE for Life beneficiaries. DOD officials incorporated this change in the requirements for the upcoming pharmacy services contract, as outlined in the RFP.

Efforts to improve service delivery.
DOD has also updated contract requirements to improve service delivery to beneficiaries under the pharmacy services contract. DOD initiated a modification to the current contract to require the contractor to provide online coordination of benefits for beneficiaries with health care coverage from multiple insurers. Specifically, the contractor is required to ensure that pharmacy data systems include information on government and other health insurance coverage to facilitate coverage and payment determinations. According to DOD officials, this change is consistent with the updated national telecommunication standard from the National Council for Prescription Drug Programs, which provides a uniform format for electronic claims processing. According to DOD officials, this change to the contract requirements eliminates the need for beneficiaries to file paper claims when TRICARE is the secondary payer, simplifying the process for beneficiaries and reducing costs for DOD. Another modification to the current contract to improve service delivery was to require the contractor to provide vaccines through its network of retail pharmacies. According to DOD officials, this modification was made to allow beneficiaries to access vaccines through every possible venue, driven by the 2010 H1N1 influenza pandemic. Contractor performance. DOD officials told us that improvements identified through the monitoring of contractor performance have also led to changes in contract requirements. Through the CORs’ monitoring of the contractor’s performance against the standards specified in the contract, the CORs may determine that a particular standard is not helping to achieve the performance desired or is unnecessarily restrictive. For example, DOD officials told us that in the current contract, they had a three-tiered standard for paper claims processing (e.g., 95 percent of paper claims processed within 10 days, 99 percent within 20 days, and 100 percent within 30 days). Through monitoring the contractor’s performance, the CORs determined that there was a negligible difference between the middle and high tiers, and holding the contractor to this performance standard was not beneficial. The requirements for the upcoming contract as described in the RFP only include two tiers—95 percent of claims processed within 14 calendar days, and 100 percent within 28 calendar days. When making changes to contract requirements, DOD officials told us they try to ensure that the requirements are not overly prescriptive, but rather outcome-oriented and performance-based. For example, DOD officials told us that they allowed the pharmacy and managed care support contractors to innovate and apply industry best practices regarding coverage and coordination of home infusion services. According to DOD officials, the contract requirements regarding home infusion are focused on the desired outcome—providing coordination of care for beneficiaries needing these services with the physician as the key decision maker—and DOD officials facilitated meetings between the pharmacy contractor and managed care support contractors to determine the details of how to provide the services. This approach is consistent with DOD’s Better Buying Power principles that emphasize the importance of well-defined contract requirements and acquisition officials’ understanding of cost-performance trade-offs.
This approach also addresses a concern we have previously identified regarding overly prescriptive contract requirements in TRICARE contracts; specifically, in our previous work on the managed care support contracts, we reported that DOD’s prescriptive requirements limited innovation and competition among contractors. Since retail pharmacy services were carved out about 10 years ago, DOD has not conducted an assessment of the appropriateness of its current pharmacy services contract structure that includes an evaluation of the costs and benefits of alternative structures. Alternative structures can include a carve-in of all pharmacy services into the managed care support contracts, or a structure that carves in a component of pharmacy services, such as the mail-order pharmacy, while maintaining a carve-out structure for other components. DOD officials told us they believe that DOD’s current pharmacy services contract structure continues to be appropriate, as it affords more control over pharmacy data and allows for more detailed data analyses and increased transparency about costs. DOD’s continued use of a carve-out contract structure for pharmacy services is consistent with findings from research and perspectives we heard from industry group officials—that larger employers are more likely to carve out pharmacy services to better leverage the economies of scale and cost savings a stand-alone pharmacy benefit manager can achieve. These arrangements may also provide more detailed information on drug utilization that can be helpful in managing drug formularies and their associated costs. In its December 2007 report, the Task Force on the Future of Military Health Care recommended examining an alternative structure for the pharmacy services contract. In addition to other aspects of DOD’s health care system, the task force reviewed DOD’s pharmacy benefit program, recommending that DOD pilot a carve-in pharmacy contract structure within one of the TRICARE regions with a goal of achieving better financial and health outcomes as a result of having more integrated pharmacy and medical services. The managed care support contractors we spoke with expressed similar concerns. However, DOD did not agree with the task force’s recommendation. In its response, DOD assessed the benefits of the current structure and affirmed the department’s commitment to this structure. Potential cost savings. In its response to the task force report, DOD did not concur with the recommendation to pilot a carve-in pharmacy contract structure, in part because of the cost savings achieved through the carve-out. Specifically, DOD stated that the carve-out arrangement is compatible with accessing federal pricing arrangements and other discounts available for direct purchases. DOD stated in its response that, under a carve-in arrangement, even on a pilot basis, it would lose access to discounts available for direct purchases, including some portion of the $400 million in annual discounts available for drugs dispensed at retail pharmacies under the NDAA for fiscal year 2008. DOD officials told us that this loss would result from the managed care support contractor being the purchaser of the drugs, rather than DOD. DOD also stated that it would possibly lose access to the volume discounts obtained for drugs purchased for the mail-order pharmacy and military treatment facility pharmacies under a carve-in structure. DOD officials told us that these disadvantages of a carve-in structure remain the same today.
Additionally, during this review, DOD noted that dividing the TRICARE beneficiary population among contractors under a carve-in would dilute the leverage a single pharmacy benefit manager would have in the market. For example, DOD would lose economies of scale for claims processing services provided by the pharmacy contractor, resulting in increased costs. However, research studies have found, and officials from TRICARE’s managed care support contractors told us, that a contract structure with integrated medical and pharmacy services could result in cost savings for DOD. For example, one recent study found that employers with carve-in health plans had 3.8 percent lower total medical care costs compared to employers with pharmacy services carved out. The researchers attributed the cost difference, in part, to increased coordination of care for the carve-in plans, which led to fewer adverse events for patients and, in turn, fewer inpatient admissions; they reported that plans with a carve-out arrangement had 7 percent higher inpatient admissions. Similarly, representatives from one managed care support contractor we spoke with stated that, in a carve-in arrangement, they thought they could use integrated medical and pharmacy services to achieve cost savings comparable to what DOD currently obtains through its federal pricing arrangements. Being able to analyze integrated, in-house medical and pharmacy data may help health plans to lower costs by identifying high-cost beneficiaries, including those with chronic conditions such as asthma and diabetes, and targeting timely and cost-effective interventions for this population. Potential health benefits from data integration. One of the task force’s goals in recommending that DOD pilot a carve-in pharmacy contract structure was to improve health outcomes as a result of integrated medical and pharmacy services. DOD noted in its task force response that it could achieve this goal under the current carve-out contract structure by including requirements in the pharmacy services contract and managed care support contracts requiring data sharing between the contractors. While the current contract requires the pharmacy and managed care support contractors to exchange data for care coordination, current TRICARE managed care support contractors told us there continue to be challenges with data sharing to facilitate disease management. Contractors expressed similar concerns about sharing medical and pharmacy data during our previous work related to DOD’s managed care support contracts. Additionally, during this review, officials from one of the managed care support contractors told us that they continue to find it challenging to generate data that provide a holistic view of beneficiaries when medical and pharmacy data remain separate. Representatives from another managed care support contractor told us that their disease management staff faced challenges in analyzing pharmacy data for groups of patients they were managing. They also told us that if these staff had more complete and real-time access to pharmacy data, as they would under a carve-in structure, they could be more proactive in assisting DOD’s efforts to identify patients who should participate in disease management programs. Additionally, researchers have found that disease management interventions may be challenging to conduct in a carve-out arrangement due to the lack of fully integrated medical and pharmacy data.
According to DOD officials, any changes to the current contract structure would result in less efficient and inconsistent pharmacy service delivery across the three TRICARE regions, as officials observed when the retail pharmacy benefit was part of the managed care support contracts. One of DOD’s reasons for the initial carve-out was a concern that pharmacy services were not being consistently implemented across the TRICARE regions. For example, DOD officials told us that two health plans in different TRICARE regions were able to have different preferred drugs within the same therapeutic class, and while both drugs may be included on DOD’s formulary, beneficiaries in different parts of the country were not being consistently provided with the same drug. In addition, according to DOD, beneficiaries were dissatisfied with a benefit that was not portable across TRICARE regions—specifically, retail pharmacy networks differed by region, so beneficiaries who moved from one TRICARE region to another would have to change retail pharmacy networks. With one national pharmacy services contract, DOD officials said they can ensure that the formulary is implemented consistently and that beneficiaries have access to the same retail pharmacy network across the TRICARE regions. Since the current pharmacy services contract structure was implemented almost 10 years ago, DOD has not incorporated an assessment of the contract structure that includes an evaluation of alternative structures into its acquisition planning activities. DOD officials told us that they consider their task force response to be an assessment of the current contract structure. While the response included a justification for the current structure, it did not include an evaluation of the potential costs and benefits of alternative structures, such as carving in all or part of the pharmacy benefit. In addition, the acquisition plan for the upcoming contract described two alternative carve-out configurations (separate contracts for the mail-order and retail pharmacies and a government-owned facility to house drugs for the mail-order pharmacy contract). However, the plan similarly did not include an evaluation of the potential costs and benefits of these options, nor did the plan include an evaluation of any carve-in alternatives. DOD officials told us there are no current plans to conduct such an evaluation as part of the department’s acquisition planning efforts. DOD officials also told us that they continue to believe the current structure is appropriate because the current carve-out structure provides high beneficiary satisfaction and is achieving DOD’s original objectives, namely consistent provision of benefits, access to federal pricing arrangements, and transparency of pharmacy utilization and cost data. Further, officials told us that the current carve-out structure is more efficient to administer with one pharmacy services contractor than the previous carve-in structure that involved multiple managed care support contractors. While DOD officials believe the current structure is appropriate, there have been significant changes in the pharmacy benefit management market in the past decade. These changes include mergers, as well as companies offering new services that may change the services and options available to DOD.
For example, representatives from one managed care support contractor we spoke with told us that they can offer different services to DOD today than they were able to offer when pharmacy services were part of the managed care support contracts. While the contractor had previously sub-contracted with a separate pharmacy benefit manager to provide pharmacy services under its managed care support contract, this contractor’s parent company now provides in-house pharmacy benefit management services for its commercial clients. Additionally, according to the parent company of another managed care support contractor, its recent decision to bring pharmacy benefit management services in-house will enhance its ability to manage total health care costs and improve health outcomes for clients who carve in pharmacy services. As we have previously reported, sound acquisition planning includes an assessment of lessons learned to identify improvements. The time necessary for such activities can vary greatly, depending on the complexity of the contract. We have also reported that a comparative evaluation of the costs and benefits of alternatives can provide an evidence-based rationale for why an agency has chosen a particular alternative (such as a decision to maintain or alter the current pharmacy services contract structure). We have reported that such an evaluation would consider possible alternatives and should not be developed solely to support a predetermined solution. With each new pharmacy services contract, DOD officials have the opportunity to conduct acquisition planning activities that help determine whether the contract—and its current structure—continues to meet the department’s needs, including providing the best value and services to the government and beneficiaries. These activities can include changing requirements as necessary, learning about current market trends, and incorporating new information and lessons learned. Acquisition planning can also incorporate an assessment of the pharmacy services contract structure that includes an evaluation of the potential costs and benefits of alternative contract structures. Incorporating such an evaluation into the acquisition planning for each new pharmacy services contract can provide DOD with an evidence-based rationale for why maintaining or changing the current structure is warranted. Without such an evaluation, DOD cannot effectively demonstrate to Congress and stakeholders that it has chosen the most appropriate contract structure, in terms of costs to the government and services for beneficiaries. To provide decision makers with more complete information on the continued appropriateness of the current pharmacy services contract structure, and to ensure the best value and services to the government and beneficiaries, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to take the following two actions: conduct an evaluation of the potential costs and benefits of alternative contract structures for the TRICARE pharmacy services contract; and incorporate such an evaluation into acquisition planning. We provided a draft of this report to DOD for comment. DOD generally concurred with our findings and conclusions and concurred with our recommendations. DOD also commented that based on past experience with alternative contract structures, it is confident that the current contract structure is the most cost efficient and beneficial. 
In response to our recommendation that DOD conduct an evaluation of the potential costs and benefits of alternative contract structures for the TRICARE pharmacy services contract, DOD commented that there is a lack of data to support inferences that a carve-in arrangement would result in cost savings to the government, and noted that the full development of two separate RFPs would be necessary to provide a valid cost comparison. While detailed cost estimates can be a useful tool for DOD, they are not the only means of evaluating alternative structures for the pharmacy services contract. For example, as we noted in our report, DOD has previously used RFIs to obtain information from industry to inform its decisions about the pharmacy services contract, and this process also may be helpful in identifying costs and benefits of alternative contract structures. In response to our recommendation that DOD incorporate such an evaluation into acquisition planning, DOD commented that it incorporated an evaluation of its past contract experience into acquisition planning for the upcoming pharmacy services contract. However, as noted in our report, the acquisition plan for the upcoming contract did not include an evaluation of the potential costs and benefits of alternative contract structures, and DOD did not directly address how it would include such an evaluation in its acquisition planning activities. We continue to emphasize the importance of having an evidence-based rationale for why maintaining or changing the current structure is warranted. With each new pharmacy services contract, DOD officials have the opportunity to determine whether the contract continues to meet the department’s needs, including providing the best value to the government and services to beneficiaries. In addition, DOD stated in its comments that our report did not address its direct-care system and noted that carving pharmacy services back into the managed care support contracts would fragment the pharmacy benefit and undermine its goal of integrating all pharmacy points of service. Our review was focused on DOD’s purchased-care system for providing pharmacy services, although we did provide context about the direct-care system as appropriate. Furthermore, we did not recommend any specific structure for DOD’s pharmacy services contract, but rather that DOD evaluate the costs and benefits of alternative structures such that it can have an evidence-based rationale for its decisions. DOD’s comments are reprinted in appendix II. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense (Health Affairs); and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Department of Defense (DOD) officials who award and administer the TRICARE pharmacy services contract are required to meet relevant certification standards applicable to all DOD acquisition officials and, according to DOD officials, some of these officials also have pharmacy-specific experience.
The training and related education and experience requirements are tailored to different levels of authority and, due to the size and complexity of the pharmacy services contract, the contracting officer and program manager for the pharmacy services contract are required to be certified at the highest levels, which require the most training and experience. In addition, all contracting officer’s representatives (COR) must meet specific training and experience requirements based on the complexity and risk of the contracts they will be working with, and the two CORs for the pharmacy services contract are also required to meet the highest COR certification level. For example, the CORs for the pharmacy services contract must complete at least 16 hours of COR-specific continuing education every 3 years, which is twice the amount required for low-risk, fixed-price contracts. DOD’s department-wide acquisition training is primarily provided through the Defense Acquisition University. Training is designed to provide a foundation of acquisition knowledge but is not targeted to specific contracts or contract types. Beyond DOD’s required training, the contracting officer, program manager, and CORs also have specialized experience in pharmacy and related issues. See table 2 for the specific certification standards for and pharmacy-specific experience of the officials responsible for awarding or administering the pharmacy services contract. In addition to the contact named above, Janina Austin, Assistant Director; Lisa Motley; Laurie Pachter; Julie T. Stewart; Malissa G. Winograd; and William T. Woods made key contributions to this report.
DOD offers health care coverage--medical and pharmacy services--to eligible beneficiaries through its TRICARE program. DOD contracts with managed care support contractors to provide medical services, and separately with a pharmacy benefit manager to provide pharmacy services that include the TRICARE mail-order pharmacy and access to a retail pharmacy network. This is referred to as a carve-out contract structure. DOD's current pharmacy contract ends in the fall of 2014. DOD has been preparing for its upcoming contract through acquisition planning, which included identifying any needed changes to contract requirements. Senate Report 112-173, which accompanied a version of the NDAA for fiscal year 2013, mandated that GAO review DOD's health care contracts. For this report, GAO examined: (1) how DOD identified changes needed, if any, to requirements for its upcoming pharmacy services contract; and (2) what, if any, assessment DOD has done of the appropriateness of its current contract structure. GAO reviewed DOD acquisition planning documents and federal regulations, and interviewed officials from DOD and its pharmacy services contractor. The Department of Defense (DOD) used various methods to identify needed changes to requirements for its upcoming pharmacy services contract. During acquisition planning for the upcoming TRICARE pharmacy services contract, DOD solicited feedback from industry through its market research process to align the contract requirements with industry best practices and promote competition. For example, DOD issued requests for information (RFI) in which DOD asked questions about specific market trends, such as ensuring that certain categories of drugs are distributed through the most cost-effective mechanism. DOD also issued an RFI to obtain information on promoting competition, asking industry for opinions on the length of the contract period. DOD officials told us that responses indicated that potential offerors would prefer a longer contract period because it would allow a new contractor more time to recover any capital investment made in implementing the contract. The request for proposals for the upcoming contract, issued in June 2013, included a contract period of 1 base year and 7 option years. DOD also identified changes to contract requirements in response to legislative changes to the TRICARE pharmacy benefit. For example, the National Defense Authorization Act (NDAA) for fiscal year 2013 required DOD to implement a mail-order pilot for maintenance drugs for beneficiaries who are also enrolled in Medicare Part B. DOD officials incorporated this change in the requirements for the upcoming pharmacy services contract. DOD has not conducted an assessment of the appropriateness of its current pharmacy services contract structure that includes an evaluation of the costs and benefits of alternative structures. Alternative structures can include incorporating all pharmacy services into the managed care support contracts--a carve-in structure--or a structure that incorporates certain components of DOD's pharmacy services, such as the mail-order pharmacy, into the managed care support contracts while maintaining a separate contract for other components. DOD officials told GAO they believe that DOD's current carve-out contract structure continues to be appropriate, as it affords more control over pharmacy data that allows for detailed data analyses and cost transparency, meets program goals, and has high beneficiary satisfaction. 
However, there have been significant changes in the pharmacy benefit management market in the past decade, including mergers and companies offering new services that may change the services and options available to DOD. GAO has previously reported that sound acquisition planning includes an assessment of lessons learned to identify improvements. Additionally, GAO has reported that a comparative evaluation of the costs and benefits of alternatives can provide an evidence-based rationale for why an agency has chosen a particular alternative. Without this type of evaluation, DOD cannot effectively demonstrate that it has chosen the most appropriate contract structure in terms of costs to the government and services for beneficiaries. GAO recommends that DOD conduct an evaluation of the potential costs and benefits of alternative structures for the TRICARE pharmacy services contract, and incorporate such an evaluation into acquisition planning. DOD concurred with GAO's recommendations.
FAA’s primary mission is to provide the safest, most efficient aerospace system in the world. FAA is responsible for operating and maintaining this system, known as the NAS, as well as for overseeing the safety of aircraft and operators. FAA operates and maintains the NAS through the following: a workforce of technicians, air traffic controllers, and other staff who work in airport towers, terminal areas, en-route centers, oceanic air traffic control centers, and other facilities, and the ATC and other supporting systems and infrastructure, including ground-based surveillance radar facilities, communication equipment, automation systems, and the facilities that house and support these systems. Various offices within FAA are responsible for the air traffic control system and its modernization through the NextGen initiative. The ATO, headed by the COO, is responsible for the day-to-day operations and maintenance of the air traffic control system. The NextGen Office, ATO, and Office of Aviation Safety are involved with various aspects of NextGen’s management and implementation. The Office of Airports is responsible for all programs related to airport safety and inspections, standards for airport design, construction, and operation. In this role, the Office of Airports supports the implementation of NextGen. These offices report to the Deputy Administrator, who also holds the designation of Chief NextGen Officer (see fig. 1). FAA receives funds annually through congressional appropriations into four accounts: The operations account funds, among other things, the operation and maintenance of the air traffic control system. The facilities and equipment account funds technological improvements to the air traffic control system, including NextGen. The research, engineering, and development account funds research on issues related to aviation safety and NextGen systems. The Airport Improvement Program account provides grants for airport planning and development. See figure 2 for the percentage of fiscal year 2013 congressional appropriations by account. Congress appropriates funding from the Airport and Airway Trust Fund, which receives revenues from a series of excise taxes paid by users of the national airspace system, as well as from general revenues. The Trust Fund provides nearly all of the funding for FAA’s capital investments in the airport and airway system. Revenue sources for the Trust Fund include passenger ticket taxes, segment taxes, air cargo taxes, and taxes paid by both commercial and general aviation aircraft. The Trust Fund also provides a substantial portion of funding for operations—for example, 80 percent of FAA’s $15.9 billion in funding in fiscal year 2014. The remaining amount was appropriated from general revenues. Whereas FAA operates, maintains, and regulates the air traffic control system in the United States, in countries such as the United Kingdom, Germany, and Canada, air navigation service providers (ANSPs) are commercialized and handle the day-to-day operations of the air traffic control systems, while the governments regulate these activities. These ANSPs employ the workforce, maintain the infrastructure, and undertake modernization efforts. International ANSPs vary in the extent of government ownership and commercialization, with some as state-owned corporations, some as public-private partnerships, and some as private corporations.
According to two recent international analyses comparing ANSPs from different countries on a range of performance measures including productivity, efficiency, and cost-effectiveness, FAA operates one of the most efficient ATC systems. According to a 2012 comparison of air traffic management performance between FAA and the combined 37 ANSPs of Europe, the United States had an arrival punctuality rate similar to Europe’s for a similar amount of continental airspace. Another international comparison of performance data from FAA and 22 global ANSPs, completed in 2013, showed similar results, with FAA ranking second in productivity. However, it is difficult to compare performance, as airspaces differ. For example, FAA’s ATC system controls about 60 percent more flights than Europe’s, its airspace is nearly twice as dense as that of the European ANSPs, and it has 23 percent fewer air traffic controllers. In addition, Europe has to coordinate among 37 ANSPs, while the United States has one. Although FAA is recognized for safety and relative efficiency, its attempts to modernize the ATC system have been less successful. We have chronicled the difficulties FAA has faced completing what it envisioned initially in 1981 as a 10-year program to upgrade and replace NAS facilities and equipment. For example, in August 1995, we found substantial cost and schedule overruns. To address these difficulties, in the past, Congress gave FAA acquisition and human capital flexibilities to improve the agency’s management of the modernization program. Specifically, in 1995, Congress directed FAA to implement new acquisition and personnel management systems and exempted the agency from certain federal acquisition and personnel laws and rules. In June 2005, we found that FAA had largely implemented these flexibilities. However, modernization difficulties persisted, and Congress directed FAA in 2003 to conceptualize and plan NextGen. NextGen was envisioned at that time as a major redesign of the air transportation system to increase efficiency, enhance safety, and reduce flight delays. NextGen is planned to incorporate precision satellite navigation and surveillance; digital, networked communications; an integrated weather system; and more. This complex undertaking requires acquiring new integrated air traffic control systems; developing new flight procedures, standards, and regulations; and creating and maintaining new supporting infrastructure. This transformation is designed to dramatically change the roles and responsibilities of both air traffic controllers and pilots and change the way they interface with their systems. The involvement of airlines and other aviation stakeholders is also critical, since full implementation of NextGen will require airlines and others to invest in new avionics and other technologies to take advantage of NextGen technologies. See figure 3 for the expected benefits from NextGen implementation as depicted through improvements to the phases of flight. In addition, to address stakeholder and congressional concerns over NextGen management practices and the pace of modernization efforts over the last decade, FAA has reorganized several times. These changes included: In 2003, FAA hired a COO and in 2004 created the ATO to transform the air traffic control system into a more performance-based organization and improve the modernization effort.
In 2011, FAA moved the office responsible for coordinating NextGen activities—the NextGen Office—out of the ATO and made it report directly to the Deputy Administrator to increase NextGen’s visibility within and outside of the agency and create a direct line of authority for NextGen. In 2012, FAA created the Program Management Office (PMO), within the ATO, to improve the oversight of ATO’s acquisition and implementation efforts, including those for NextGen. At the direction of the FAA Modernization and Reform Act of 2012, FAA created the Chief NextGen Officer position, currently held by the Deputy FAA Administrator, who reports directly to the FAA Administrator. However, challenges continue to persist, as we found in April 2013, August 2013, and February 2014. Specifically, we found that while FAA had made some progress in implementing the NextGen modernization program, FAA continued to experience challenges, including in the following areas: Human capital activities: Improving and sustaining NextGen leadership and preparing FAA’s workforce. Program management: Prioritizing projects to achieve some near- and mid-term benefits and managing NextGen interdependencies. Coordination with industry stakeholders: Gaining greater involvement from industry stakeholders in FAA’s initiatives and equipping aircraft with NextGen technologies. Transitioning to NextGen: Balancing the needs of the current ATC system and NextGen and consolidating and realigning FAA’s facilities. In these reports, we made six recommendations to FAA regarding the improvement of budget planning, performance-based navigation implementation, and stakeholder coordination and communication. DOT concurred with these recommendations but, as of August 2014, had not yet implemented them. Stakeholders’ views on FAA’s capability to operate an efficient ATC system generally align with the two international analyses described previously. Almost three-quarters (53) of the 72 stakeholders who provided a rating rated FAA as moderately to very able to operate an efficient ATC system. Four stakeholders did not rate FAA on this issue. (See table 1 for the stakeholders’ ratings.) In addition, during our interviews, over three times as many stakeholders specifically mentioned that the ATC system is generally efficient (37) as said the system is not (12). Fourteen stakeholders specifically said that FAA operates the most efficient system in the world. Notwithstanding this generally positive assessment, stakeholders raised areas where FAA could improve. For example, 29 stakeholders indicated that FAA does not handle irregular air traffic operations very well, such as those caused by inclement weather. Stakeholders’ views regarding NextGen implementation also reflect our past findings on FAA’s difficulties in implementing the initiative. Eighty percent (56) of the 70 stakeholders who provided a rating rated FAA as marginally to moderately able to implement NextGen. Six stakeholders did not rate FAA. (See table 1.) In addition, during our interviews, more than three times as many stakeholders (43) said that FAA’s overall implementation of NextGen was not going well as said it was going well (13), and 30 specifically mentioned that FAA was not doing well managing technology programs in general and NextGen acquisitions and contracts in particular. In our interviews with FAA senior management, officials acknowledged that stakeholders’ complaints about NextGen were not new.
They also said that the agency is taking steps to improve implementation, that NextGen is now on track, and that the agency is starting to focus more on using these technologies to improve flight efficiencies and reduce flight time and fuel use, steps that should result in stakeholders realizing tangible benefits in the future. Almost all (75) of the 76 stakeholders identified challenges that they stated FAA faces in improving ATC operations and overcoming difficulties in implementing NextGen. (See app. IV for a list and description of the challenges for FAA that stakeholders identified during our interviews.) The six challenges stakeholders noted most often are discussed below. These challenges are long-standing, as we have issued reports on them as far back as the 1980s, and more recently in the past few years. Automatic Dependent Surveillance-Broadcast (ADS-B). ADS-B, a key NextGen program, is a technology that enables aircraft to continually broadcast flight data—such as position, air speed, and altitude, among other types of information—to air traffic controllers and other aircraft. ADS-B Out is the ability to transmit ADS-B signals; ADS-B In is the ability to receive ADS-B signals from the ground and other aircraft, process those signals, and display traffic and weather information to flight crews. The Federal Aviation Administration required that airplanes be equipped with ADS-B Out by January 1, 2020. On the other hand, aircraft operators are not required to install ADS-B In, but may choose to do so, as is the case for most NextGen equipment. Consistent with what we have found in the past, stakeholders and FAA officials told us that ensuring that aircraft are equipped with avionics to take advantage of NextGen technologies is a challenge. Full implementation of NextGen will necessitate that system users make significant investment in new technologies. FAA estimated in 2013 that, of the estimated $18.1-billion overall implementation cost that is to be shared between airlines and FAA, airlines would need to invest $6.6 billion in avionics to realize the full potential benefits from NextGen capabilities. Forty-six of the stakeholders we interviewed raised this issue, such as convincing users to equip their aircraft with avionics to take advantage of NextGen technologies, as a challenge for FAA. Stakeholders explained that users have been reluctant to equip their aircraft due to the expense and uncertainty over FAA’s ability to meet timelines for deploying NextGen technologies. In April 2013, we found that airlines and other stakeholders had expressed skepticism about the progress FAA had made to date in implementing NextGen technologies, skepticism that, in turn, had affected their confidence about whether benefits would justify these investments. While some stakeholders agreed that equipping aircraft is necessary for successful and continuous modernization, they differed on who should bear responsibility for paying for equipage—users or FAA. In August 2013, we noted that the FAA Modernization and Reform Act of 2012 required FAA to report on options to encourage equipping aircraft with NextGen technologies and the costs and benefits of each option. FAA officials we interviewed said that they have completed the installation of the ground infrastructure for Automatic Dependent Surveillance-Broadcast (ADS-B) Out and that aviation system users, in turn, must equip their aircraft with ADS-B Out avionics by FAA’s 2020 equipage deadline.
Both the aviation stakeholders and FAA officials we interviewed regard budget uncertainty as a challenge for FAA. Forty-three stakeholders raised budget uncertainty as a difficulty for FAA’s ability to continue operation of an efficient ATC system and/or implementation of NextGen. One factor stakeholders raised as contributing to budget uncertainties is the annual appropriations process. In all but 3 of the last 30 years, Congress has passed “continuing resolutions” to provide funding for agencies to continue operating until agreement is reached on final appropriations. Further, according to the House Transportation and Infrastructure Committee, prior to the FAA Modernization and Reform Act of 2012, FAA had operated under 22 extensions that provided short-term funding for the agency since the expiration of the 2007 Aviation Authorization legislation. According to some stakeholders, the stops and starts associated with continuing resolutions make it difficult for FAA to carry out long-term planning and strategic development of future technologies and innovation. We found in September 2009 and March 2013 that continuing resolutions can create budget uncertainty for agencies about both when they will receive their final appropriation and what level of funding will ultimately be available. We further found that operating under continuing resolutions can also complicate agency operations and cause inefficiencies, such as leading to repetitive work, limiting agencies’ decision-making options, and making trade-offs more difficult. On the other hand, attempting to mitigate the effects of an unpredictable funding stream is not a new challenge for FAA, or for many other federal agencies that have had to operate in times of an uncertain fiscal environment. Stakeholders also indicated that the current budgetary conditions—the fiscal year 2013 budget sequestration (the across-the-board cancellation of budgetary resources) along with the associated employee furloughs and the October 2013 government shutdown—have made FAA’s funding less predictable. In turn, this can make it difficult for FAA to run a 24/7 operation and maintain the ATC system as part of the transition to NextGen. In March 2014, we detailed the effects of the fiscal year 2013 budget sequestration on federal agencies, including FAA, such as reducing or delaying some public services and disrupting some operations. We found that DOT took actions to minimize the effects of sequestration on FAA operations by beginning to plan for it during the summer of 2012, focusing on ensuring the safety of the traveling public, according to DOT officials. DOT halted these actions when it was provided with statutory authority to make a one-time transfer of $253 million between budget accounts to address these issues. As a result of this transfer, FAA minimized the number of planned furlough days and restored ATC services and other aviation activities; however, these efforts did not prevent delays from occurring in major metropolitan areas—including New York, Chicago, and Southern California—according to FAA, because fewer controllers were available to manage air traffic. FAA senior management generally agreed with the stakeholders’ perspective that unpredictable budgets make planning and managing the ATC system and NextGen programs difficult and result in delays and inefficiencies.
The senior managers did not offer specific solutions; however, they indicated that if FAA received more funding that was available across fiscal years, rather than just for one fiscal year at a time, and had a greater ability to move funds between accounts, FAA would be able to improve its operations and NextGen implementation. Consistent with what we have found in the past, the stakeholders and FAA senior management agree that improving human capital activities is a challenge for FAA. Forty-two stakeholders identified human capital activities as a challenge for FAA in improving the efficiency of the ATC system and/or implementing NextGen. Among the human capital challenges the stakeholders identified were matching workforce skills with FAA needs for hiring and staffing, insufficient training, and planning for upcoming retirements. FAA senior management also raised human capital challenges during our discussions with them. For example, one senior official acknowledged that providing required training is an element of delivering the full capability of NextGen and is a challenge, but that FAA was working to address it. We have also reported on FAA’s workforce training and staffing issues in the past. For example, in August 2013 we found that FAA had been working to address long-standing challenges associated with involving its air traffic controller and technician workforce in developing and implementing NextGen systems, steps that are critical to the successful implementation of NextGen. In addition, we found that during the NextGen transition, FAA would need a sufficient number of skilled controllers who are able to increasingly rely on automation, technicians who are able to properly maintain and certify both existing and NextGen systems, and a sufficient acquisitions workforce to successfully acquire NextGen systems and equipment. Stakeholders identified challenges in implementing new navigation procedures, and we have found similar challenges in previous work. A large percentage of the current U.S. air carrier fleet is equipped to fly using Performance-Based Navigation (PBN) procedures, which are precise routes that use the Global Positioning System or glide descent paths (see fig. 4). While 21 stakeholders said the development and implementation of PBN-related procedures was improving or working well, almost twice as many (41) said that this process was not working well or was moving too slowly. Even when stakeholders said that some things were working well, such as the Greener Skies Over Seattle initiative—a satellite-based navigation arrival procedure intended to save aviation system users more than two million gallons of fuel a year and significantly reduce aircraft exhaust and emissions—they pointed to other areas where implementation is taking too long. In April 2013, we found that FAA continues to face challenges in implementing PBN procedures and in explaining to stakeholders the benefits that accrue from their use. Specifically, FAA is not fully leveraging its ability to streamline the development of PBN procedures and the use of third parties to develop, test, and maintain these flight procedures. Senior FAA officials emphasized that their Optimization of Airspace and Procedures in the Metroplex (OAPM) initiative is yielding good results and pointed to the successful use of PBN procedures not only in Seattle but also in the areas around Houston, North Texas, Washington, D.C., and Denver.
Officials also said that implementing PBN is one of their top priorities and is part of an effort to deliver near-term benefits and capabilities to system users by 2016. Officials explained that they are working on PBN, through several metroplex-based initiatives, and all parts of the country will not see PBN benefits at the same time. Consistent with what we have found in previous work, stakeholders told us that FAA needs to deliver benefits of NextGen in the near term. To convince aviation system users to make investments in NextGen equipment, FAA must continue to deliver systems, procedures, and capabilities that demonstrate near-term benefits and returns on users’ investments. Forty stakeholders identified as a challenge FAA’s inability to articulate to the industry what NextGen is and what near-term benefits NextGen is going to provide to users. Similarly, in April 2013, we noted the need for FAA to demonstrate to stakeholders NextGen benefits over the next few years. For example, we found that FAA had made some progress in key operational improvement areas, such as upgrading airborne traffic management to enhance the flow of aircraft in congested airspace, revising standards to enhance airport capacity, and focusing FAA’s PBN efforts at priority OAPM sites with airport operations that have a large effect on the overall efficiency of the NAS. However, we also found that in pursuing these near-term benefits, FAA had to make trade-offs in selecting sites and did not fully integrate implementation of its operational improvement efforts at airports. We concluded that because of the interdependency of the improvements, their limited integration could also limit benefits in the near term. Accordingly, we recommended, among other things, that FAA should proactively identify new PBN procedures for the NAS, based on NextGen goals and targets, and evaluate external requests so that FAA can select appropriate solutions and implement guidelines for ensuring timely inclusion of operational improvements at metroplexes such as OAPMs. DOT concurred with these recommendations and is working to address them. FAA senior managers said they were aware of stakeholders’ desire for near-term benefits and told us that they either have taken or plan to take the following steps to address stakeholders’ concerns. FAA plans to emphasize “high priorities” for users based on recommendations of two FAA advisory committees—the NextGen Advisory Committee (NAC) and the RTCA (once called the Radio Technical Commission for Aeronautics). The high priorities are new multiple runway operational procedures at 7 airports by fiscal year 2015, PBN procedures at 9 metroplexes and an additional two metroplexes by October 2014, surface surveillance at 44 airports by fiscal year 2017, and data communications to provide tower clearance delivery at 57 airports by fiscal year 2016. FAA has identified seven NextGen and NextGen-related programs that will be able to deliver near-term benefits and capabilities by 2016, with no additional requirements for users to equip their aircraft until the January 1, 2020, FAA-required deadline for aircraft to be equipped with ADS-B Out technology.
The FAA Administrator has begun holding quarterly briefings on NextGen progress and benefits with airline chief executive officers (CEOs); however, senior management noted that the diverse range of interests within the industry, and even between CEOs and operations staff within the same company, can make the communication of NextGen progress and benefits challenging. According to FAA’s Assistant Administrator for NextGen, in October 2014 FAA will release a road map outlining the official timeline of the implementation of its NextGen modernization project that will guide FAA through 2025. Stakeholders and FAA officials agree that a challenge for FAA is to maintain the ATC infrastructure through the transition to NextGen while also consolidating or closing aging facilities. Because NextGen represents a transition from existing ATC systems and facilities to new systems, it necessitates changes to or consolidation of existing facilities. Thirty-seven of 76 stakeholders mentioned that consolidating or closing older air traffic control facilities and the need to maintain older “legacy” systems were challenges. Stakeholders noted congressional interest in preserving ATC facilities and the associated jobs in members’ districts as a factor that makes it more difficult for FAA to close facilities. FAA officials acknowledged that reducing the “footprint” of the air traffic control infrastructure has been difficult but added that they are working on their first set of facility consolidation recommendations, as required by law, and will have those recommendations ready by the end of 2014. In August 2013, we found that if aging systems and associated facilities were not retired, FAA would miss potential opportunities to reduce its overall maintenance costs at a time when resources needed to maintain both systems and facilities may become scarcer, and we recommended that FAA develop a strategy for implementing the Air Traffic Organization’s (ATO) plans. FAA concurred with this recommendation and is working to develop such a strategy by September 2014. An example of a facility FAA plans to close—a very high frequency omnidirectional radio range (VOR) station—is shown in figure 5. Overall, while stakeholders generally thought the current ATC system was operating at least moderately efficiently under FAA’s leadership, when asked what potential changes, if any, to FAA could improve the performance of ATC operations and NextGen implementation, 64 of the 76 stakeholders we interviewed suggested changes. The six most often suggested changes are discussed below. (See app. V for a list and examples of the changes to FAA that the stakeholders suggested during our interviews with them.) Some of these changes address the six previously mentioned challenges raised by stakeholders, while the rest address other challenges stakeholders identified. Change how FAA Is Funded. The change suggested by the most stakeholders (36 of the 64 stakeholders who suggested a change) was to modify how FAA’s ATC operations and NextGen programs are funded. As discussed earlier, budget uncertainty was raised by stakeholders as a challenge for FAA’s ATC operations, NextGen modernization, or both. While 36 stakeholders said a change to the funding process or source of funding was needed, most focused on the outcome they would like to see, namely a more stable or predictable funding stream. Fewer stakeholders (11) offered specific suggestions on how to achieve this outcome.
For example, one stakeholder suggested providing FAA with a top-line budget number and then allowing FAA to determine how to allocate resources based on its priorities. FAA officials suggested changes to the agency’s funding mechanism that could improve FAA’s ability to operate the ATC system and implement NextGen, including allowing FAA the flexibility to use funds for its highest-priority areas, increasing the fees for registering aircraft, and authorizing FAA to use multi-year funds. Improve Human Capital Activities. Twenty-four of the 64 stakeholders who suggested a change suggested human capital improvements. Stakeholder suggestions included updating the air traffic controller’s handbook, improving the training air traffic controllers receive on new technologies, and streamlining the hiring process. For example, stakeholders said changes were needed to streamline FAA’s air traffic controller-training programs and to ensure the best applicants are hired, especially as many current controllers begin to retire. In June 2008, we reported on FAA’s efforts to hire and train new controllers, in light of the expected departure, mostly due to retirements, of much of the current air traffic controller workforce of over 15,000 controllers between 2008 and 2017. We also found FAA needed to ensure that technician- and controller-training programs were designed to prepare FAA’s workforce to use NextGen technologies. Regarding updating the air traffic controller’s handbook, a senior FAA official said that stakeholders do not appreciate what changing the handbook involves, such as running safety scenarios and testing new procedures to ensure any changes do not adversely affect safety. More broadly, another senior FAA official said that shifting to NextGen would require a cultural change in how air traffic controllers are trained to respond to traffic. Improve Internal Collaboration. Twenty-four of the 64 stakeholders who suggested a change suggested FAA needs to improve internal collaboration within the organization. Stakeholders said different offices within FAA do not communicate well with one another and that this situation has resulted in difficulties and delays in the rollout of NextGen technologies and procedures. Stakeholder suggestions included improving how FAA’s lines of business work together to implement NextGen. In August 2013, we found that FAA is making progress in ensuring communication on NextGen issues across lines of business, for example, through the NextGen Management Board and biweekly program review meetings. In the same report, we also discussed how designating one leader, such as giving the Deputy Administrator responsibility over NextGen, can improve interagency collaboration and speed decision-making. While external stakeholders raised internal collaboration as an area in need of improvement, FAA senior management said that there are good working relationships between the lines of business responsible for ATC operations and NextGen implementation, especially among the Assistant Administrator for NextGen, the COO of the ATO, and the Associate Administrator for Aviation Safety. Streamline Processes. Twenty-three of the 64 stakeholders suggesting a change suggested that FAA needs to streamline some of its processes. Stakeholder suggestions included streamlining the development and implementation of flight navigation procedures, the certification of new aircraft equipment, and the acquisition of new technology.
For example, to streamline its process for certifying new technology, one stakeholder said that FAA should use an approach that recognizes that once a type of equipment, such as an antenna, is found to be safe, FAA does not have to individually inspect every piece of that equipment produced. FAA officials said that they are making progress in streamlining both the certification of new technology and the development of new procedures; however, FAA must ensure that new procedures and technology are evaluated for potential safety and environmental concerns and that community outreach occurs. In April 2013, we found that FAA's processes and requirements, while keeping the U.S. airspace safe, are also complex and lengthy. This includes the processes for developing performance-based navigation (PBN) and other new flight navigation procedures. In the April 2013 report, we also found that FAA had efforts under way to address some of these issues, such as the Navigation Lean (NAV Lean) initiative, which is focused on streamlining the implementation and amendment processes for all flight procedures, but it will be several years before the impact is known. In June 2014, the Department of Transportation Inspector General's office found that aviation stakeholders are unlikely to see the full benefits of the NAV Lean initiative, namely a reduction in the time it takes to implement new procedures, until September 2015 or later. In October 2010 and October 2013, we found inefficiencies in the certification and approvals process and variations in FAA's interpretation of certification standards, and we recommended improvements FAA could make to evaluate and track its certification and approval processes. In October 2013, we also found that while FAA had developed milestones and deployed a tracking system to monitor each certification-related initiative, FAA had not identified overall performance metrics for these efforts to determine whether they would achieve their intended effects. Ultimately, we concluded that having efficient and consistent certification processes will allow FAA to better use its resources as its workload increases with the implementation of NextGen. Improve Coordination with Industry Stakeholders. Stakeholders acknowledged the improvements FAA has made in involving stakeholders in the planning and implementation of NextGen initiatives, especially through the NextGen Advisory Committee. However, 23 of the 64 stakeholders who suggested a change suggested FAA should do more to encourage participation and communication with industry stakeholders. For example, one stakeholder said that while FAA has improved its collaboration with industry stakeholders, particularly by including a wider range of stakeholders, FAA needs to ensure that stakeholders are involved early in the planning process for NextGen initiatives. FAA officials said that ensuring the appropriate stakeholders are involved in an effort is a challenge, but noted that FAA has ongoing efforts to involve the right stakeholders and avoid some of the earlier difficulties in rolling out NextGen programs. Similarly, in April 2013, we found that FAA is making progress in systematically involving industry stakeholders, air traffic controllers, and other key subject matter experts in its initiatives, such as the Optimization of Airspace and Procedures in the Metroplex (OAPM) initiative. 
However, we have also recommended areas for improvement, such as developing and implementing guidelines for ensuring timely inclusion of appropriate stakeholders, including airport representatives, in the planning and implementation of NextGen improvement efforts. DOT concurred with these recommended areas for improvement and is taking steps to implement the recommendations. Increase Accountability. Twenty-one of the 64 stakeholders who suggested a change suggested FAA needs to increase accountability. Stakeholder suggestions included that FAA should hold its employees and management accountable for how well they accomplish program and plan goals and for how funds are spent. For example, one stakeholder suggested an annual operating plan could help hold FAA accountable to its performance goals. The need for more accountability at FAA, specifically regarding the implementation of NextGen, cuts across several areas we have previously reported on. In February 2014, we found that complex organizational transformations, such as NextGen, require substantial leadership commitment over a sustained period and that leaders must be empowered to make critical decisions and held accountable for results. In April 2013, we also found that to address accountability issues, FAA has taken steps, such as designating the Deputy Administrator as the Chief NextGen Officer with responsibility for all NextGen activities. In the same report, we also discussed that the use of performance measures would allow stakeholders to hold FAA accountable for results. In light of the ongoing discussion within the aviation industry on new approaches for operating and modernizing the ATC system, we also asked stakeholders about changing the provision of ATC services to improve ATC efficiency and NextGen implementation. These potential changes include moving the provision of ATC services out of FAA into a separate unit or organization and commercializing ATC services, as has been done in Canada. Seventy percent of the stakeholders (53 of 76) agreed that separating ATC operations from FAA was an option, but half of these stakeholders (26) voiced serious reservations or indicated such a change was unlikely to occur. Stakeholders also cited potential benefits of separating air traffic control operations from FAA, including a more predictable funding source; potentially reduced political involvement in ATC operational decisions; faster and less costly modernization of the ATC system; and more efficient day-to-day operations. The remaining stakeholders either thought a separate ATC system was not a good idea (12) or did not provide an opinion on or answer this question (11). See table 2 below for stakeholder responses. Stakeholders also raised several issues that would need to be taken into account before making changes to the provision of ATC services. Further, no stakeholder category was unanimous in either supporting or rejecting the option to change provision of ATC services. Airlines were generally more supportive of separating the ATC system from FAA than were labor unions and professional associations. General aviation stakeholders were open to the idea but had reservations about the funding scheme. See table 2 for stakeholder responses to this question by industry category. 
In addition, FAA officials said that they were not opposed to privatization or commercialization of the ATC system, but they would rather focus on what services FAA should provide and on the best way to pay for those services. Few stakeholders suggested a specific alternative structure for the provision of ATC services, although some listed potential characteristics of an alternative structure, such as user fees, a public-private partnership, and a board of directors composed of system users. One example of a specific alternative structure suggested by a stakeholder was a Consumer Service Corporation with no shareholders, so as to avoid vested interests. Others suggested models similar to NAV CANADA, a non-profit trust with a board of representatives made up of industry and government, financed with user fees, and regulated by government. Both stakeholders and FAA officials said it was important to identify what problem or problems separating ATC services out of FAA is intended to solve before proceeding with it as a solution. However, 65 of the 76 stakeholders suggested actions to take or raised issues or concerns to consider if a change were to be made. These issues and concerns include the following: Funding: Forty of these 65 stakeholders said the source of funding for a separate air traffic control system is important to consider. Stakeholders suggested different sources of revenue to support a separated ATC system, including user fees and a fuel tax. Several stakeholders suggested the possibility of accessing funds through capital markets as an advantage of a separated ATC system. Because the expectation of a future revenue stream (through user fees, for example) may enable a corporatized or privatized ATC system to access private capital markets (to obtain, for example, a bond issuance), a potential benefit of such a structure could be more reliable financing for multiyear investment projects as well as for operations. Lessons Learned: Thirty-eight of these 65 stakeholders suggested studying what the separation of air traffic services from FAA would look like. Stakeholders suggested looking at how air navigation service providers function in other countries and trying to learn from their successes and mistakes. For example, one stakeholder said that given the efficiency of the current system, before any changes are made, there needs to be an analysis of how privatization would affect passengers, airlines, and the aviation industry, and of what improvements, including fewer delays and more capacity, it could offer. In a 2005 review of selected foreign air navigation service providers (ANSPs), we also found some lessons learned during commercialization, including being prepared to mitigate the financial effects of an industry downturn; the importance of involving industry stakeholders in efforts to design, acquire, and deploy new technologies; balancing the business needs of an ANSP with smaller communities' need for air service; and the importance of maintaining an appropriate level of staff to carry out safety regulation. Congressional involvement: Twenty-nine of these 65 stakeholders suggested that the extent of Congress's role in overseeing a separate ATC system must be clarified. For example, stakeholders said that Congress's oversight responsibilities for a separate ATC system, and even whether Congress should have oversight of such a system, need to be considered. 
Regulatory coordination: Twenty-seven of these 65 stakeholders suggested that ensuring coordination between the safety regulator and a separate ATC system should be considered. Stakeholders noted, for example, that ensuring coordination might be more difficult with a separated ATC system than under the current structure. Governance: Twenty-five of these 65 stakeholders suggested that governance of a separated ATC system, such as including system users on an oversight body, needs to be considered. For example, one stakeholder asked who would be on a board of directors and how those individuals would be chosen. Safety: Twenty-four of these 65 stakeholders raised concerns about safety under a separate ATC system. Stakeholders were concerned about several issues, including the effect of a non-governmental operator's profit motive on safety and whether requiring users to pay a fee to use air traffic control services might discourage use of the system. Transition management: Twenty-one of these 65 stakeholders raised concerns about how to transition from the current system to a separate ATC system. Stakeholder concerns included the length and difficulty of such a transition, as well as questions about what to do with the infrastructure and personnel in the current system and with the modernization efforts already under way. FAA senior management also cited transition management as a potential impediment to moving to a different air traffic control structure. Specifically, an FAA official said that those advocating privatization do not appear to understand how to move from a government-operated system to a privatized system, given the need to operate the NAS at a high level of safety and efficiency. FAA officials also raised concerns about the length and difficulty of such a transition. For example, one official mentioned that a transition to a new organization would have to include cultural and personnel changes and could take many years to implement. Another official was concerned that a transition to a privatized system now could negatively affect the implementation of NextGen. In November 2002, we found that successful change management initiatives in large private and public sector organizations can often take at least 5 to 7 years. Access: Nineteen of these 65 stakeholders also raised concerns about access to the NAS under a separate ATC system. For example, stakeholders were concerned that small communities could lose access to the ATC system and that the fees charged by a separate ATC system might reduce general aviation's access to it. We provided DOT with a draft of this report for its review and comment. DOT provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our work for this report focused on aviation stakeholders' perspectives on the performance of the air traffic control (ATC) system and efforts to modernize it. 
This report examines stakeholder perspectives on (1) the performance of the current ATC system and its modernization through the NextGen initiative, and any challenges the Federal Aviation Administration (FAA) may face in managing these activities; and (2) potential changes, if any, that could improve the performance of the ATC system, including FAA's modernization initiative. We were also asked to obtain stakeholders' perspectives on the safety of the National Airspace System (NAS). However, since nearly all stakeholders we interviewed agreed that the NAS is extremely or very safe, we did not focus on this area in this report. To obtain aviation stakeholders' perspectives on these issues, we interviewed a non-probability sample of 76 aviation stakeholders. We created an initial list of stakeholders using internal knowledge of the aviation industry. We then added more stakeholders based on interviewee responses to our question on whom else they thought we should speak with. Specifically, we wanted to obtain perspectives from individuals and organizations with direct experience as users of the current ATC system, modernization efforts, and FAA's management of the system, or with knowledge of these areas through research or study. As such, we limited our review to U.S.-based companies and airlines and sought the views of individuals and organizations with a stake in the performance of the NAS. We divided stakeholders into the following nine categories: airlines, airports, aviation experts and other relevant organizations, general aviation, labor unions and professional associations, manufacturers and service providers, other federal government agencies (Department of Defense and National Aeronautics and Space Administration (NASA)), passenger and safety groups, and research and development organizations. A list of the individuals and groups we interviewed is in appendix II. We used a semi-structured interview format with both closed- and open-ended questions to obtain aviation stakeholder perspectives on the efficiency of the current ATC system, the implementation of NextGen, and changes, if any, that could improve the operation of the ATC system and the implementation of NextGen. Our interview format contained four closed-ended questions with either a five-level scale or a yes/no response. These closed-ended questions, the response categories, and stakeholder responses are either included in the body of the report or in appendix III, as appropriate. The intent of our open-ended questions was to engage the stakeholders in a conversation about the issues they considered most important and relevant. The results of our review are not generalizable to the industry as a whole. Our discussion of the challenges FAA faces, potential changes to FAA, and issues to consider if the ATC system were separated from FAA is based on stakeholder responses to our open-ended questions. As such, the numbers we report for these items represent the stakeholders that raised a challenge or issue to consider or suggested a change during our interviews. When we report that 43 stakeholders raised budget uncertainty as a challenge, this does not necessarily mean that the remaining 33 stakeholders we interviewed disagreed. Rather, it means that those stakeholders did not raise it during the course of our interview. We analyzed the responses to these open-ended questions to identify the main themes raised by stakeholders. To ensure the accuracy of our content analysis, we internally reviewed our coding and reconciled any discrepancies. 
In discussing stakeholder responses to our open-ended questions, we aggregated their responses and reported on stakeholders' perspectives in general. Stakeholder responses to the yes-no question (Do you think that separating the functions of safety regulator and ATC service provider into separate units or organizations is an option for the United States?) fell into four general categories, which we describe in this report as yes, maybe, no, and no opinion. Respondents who answered "yes" to this question said that separating the ATC service provider from the safety regulator (FAA) was not only an option, but also a good idea. While these respondents still raised issues to consider, they said that this option should be considered and were generally supportive of it. We classified respondents' answers as "maybe" for those who answered that this was an option but generally said either that it was not a good idea, that it was not feasible in the United States, or that they had very strong reservations about such a change. Respondents who answered "no" generally said that it was a bad idea or would simply not work in the United States. Finally, some respondents indicated that they had "no opinion," meaning that their organization did not have an official position on whether this change was an option in the United States. We reported on stakeholder responses to this closed-ended question by industry category. We combined the three industry categories with fewer than four respondents (other federal government agencies, passenger and safety groups, and research and development organizations) into one category, which we refer to in table 2 as Other stakeholders. To obtain FAA senior management views on the preliminary results of our content analysis of stakeholder perspectives, we conducted semi-structured interviews with the Administrator; the Deputy Administrator/Chief NextGen Officer; the Assistant Administrator for NextGen; the Associate Administrator for Aviation Safety; the Chief Operating Officer (COO) of the Air Traffic Organization (ATO); and the Assistant Administrator for Policy, International Affairs, and Environment. We reviewed GAO reports and other sources of aviation information to provide context for the challenges raised by the stakeholders and their suggested changes to the current structure. We identified reports that had discussed the stakeholder-identified themes, including collaboration with stakeholders, delivery of NextGen capabilities, Performance-Based Navigation procedures, and FAA leadership in overseeing NextGen implementation. We conducted this performance audit from November 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The individuals and groups we interviewed were as follows:
Airlines for America (A4A)
Cargo Airline Association (CAA)
National Air Carrier Association (NACA)
Regional Airline Association (RAA)
Regional Air Cargo Carriers Association (RACCA)
United Parcel Service (UPS)
Houston Airport System (George Bush Intercontinental Airport, William P. Hobby Airport, and Ellington Airport)
Los Angeles World Airports (Los Angeles International Airport, LA/Ontario International Airport, and Van Nuys Airport)
Metropolitan Airports Commission (Minneapolis-St. Paul International Airport)
Port Authority of New York and New Jersey (John F. Kennedy International Airport, Newark Liberty International Airport, LaGuardia Airport, Stewart International Airport, and Teterboro Airport)
Airports Council International—North America (ACI-NA)
American Association of Airport Executives (AAAE)
Bill Ayer, Chair of the NextGen Advisory Committee (NAC)
Air Traffic Control Association (ATCA)
Michael Baiada, President and Chief Executive Officer, ATH Group
Gary Church, President, Aviation Management Associates
Dr. George Donohue, Systems Engineering and Operations Research, George Mason University
Michael Dyment, Managing Partner, NEXA Capital Partners, LLC
Amr ElSawy, President and Chief Executive Officer, Noblis Inc.
Dr. Mark Hansen, Civil and Environmental Engineering, University of California, Berkeley
Dr. John Hansman, Aeronautics and Astronautics, Massachusetts Institute of Technology
Robert Poole, Director of Transportation Policy, Reason Foundation
RTCA (formerly known as the Radio Technical Commission for Aeronautics)
Dr. Stephen Van Beek, Vice President, ICF International
J. Randolph Babbitt, Former Administrator (2009-2011)
Russell Chew, Former Chief Operations Officer, Air Traffic Organization (2003-2007)
Richard Day, Former Senior Vice President of Operations, Air Traffic Organization (2008-2010)
David Grizzle, Former Chief Operations Officer, Air Traffic Organization (2011-2013)
Aircraft Owners and Pilots Association (AOPA)
Helicopter Association International (HAI)
National Air Transportation Association (NATA)
National Business Aviation Association (NBAA)
Air Line Pilots Association (ALPA)
Allied Pilots Association (APA)
Coalition of Airline Pilots Associations (CAPA)
National Air Traffic Controllers Association (NATCA)
NetJets Association of Shared Aircraft Pilots (NJASAP)
Professional Aviation Safety Specialists (PASS)
Southwest Airlines Pilots' Association (SWAPA)
Aerospace Industries Association (AIA)
Aircraft Electronics Association (AEA)
General Aviation Manufacturers Association (GAMA)
United Technologies (UTC) Aerospace Systems
Department of Defense (DOD)
National Aeronautics and Space Administration (NASA)
Travelers United (formerly Consumer Travel Alliance)
MITRE Center for Advanced Aviation System Development (CAASD)
(Table note: This rating was not given to the stakeholders as a choice; however, the stakeholders' answers fell between these categories.)
Descriptions and examples of challenges cited by stakeholders:
Due to the expense and uncertainty over FAA's ability to meet timelines for deploying NextGen technologies, users have been reluctant to equip their aircraft.
Budget uncertainty makes it difficult for FAA to continue operation of an efficient ATC system and/or implement NextGen.
FAA does not match workforce skills with needs for hiring and staffing, provides insufficient training, and has insufficient planning for upcoming retirements.
FAA's development and implementation of PBN-related procedures is not working well or is moving too slowly.
FAA must continue to deliver systems, procedures, and capabilities that demonstrate near-term benefits and returns on users' investments to convince aviation system users to make investments in NextGen equipment.
FAA must plan for changes to or consolidation of existing facilities because NextGen represents a transition from existing ATC systems and facilities to new systems.
FAA's offices are stove-piped, do not share information with each other well, or are not horizontally integrated.
FAA's aversion to risk and focus on safety prevents improvements in efficiency and adoption of new technologies and procedures.
FAA does not handle ATC operations well when airspace capacity is affected by congestion and disruptions due to, for example, inclement weather and power outages.
Congress politicizes FAA's budget and micromanages FAA operations.
FAA's organizational structure misplaces offices and blurs lines of authority and responsibilities.
FAA does not communicate, coordinate, or collaborate well with the aviation industry.
FAA does not plan well, such as setting unrealistic deadlines, or its plans lack clarity and precision.
FAA's leadership and political appointees lack the right professional background and experience.
FAA does not operate airport surface operations well to accommodate increased air capacity or maintain surface infrastructure well.
FAA's policies and procedures are not up-to-date or lack clarity.
FAA controllers in different regions and airports are not consistent in applying procedures, such as approach and departure procedures.
There is little accountability in FAA, such as for NextGen delays.
FAA's process for certifying safety, aircraft, avionics, and personnel takes too long, or is inconsistent.
FAA lacks adequate performance measures, or its measures are output-related instead of outcome-related.
Examples of suggested changes:
FAA needs a more stable or predictable funding stream.
FAA needs to improve human capital activities, including updating the air traffic controller handbook, improving the training air traffic controllers receive on new technologies, and streamlining the hiring process.
FAA needs to improve communication within the agency to reduce difficulties and delays in the rollout of NextGen technologies and procedures.
Streamline processes: FAA needs to streamline the development and implementation of flight navigation procedures, the certification of new aircraft equipment, and the acquisition of new technology.
While FAA has made improvements involving stakeholders in the planning and implementation of NextGen initiatives, FAA should do more to encourage participation and communication with industry stakeholders.
FAA should hold its employees and management accountable for how well they accomplish program and plan goals and for how funds are spent.
There needs to be consistent and empowered leadership at FAA.
FAA needs to ensure that all NextGen activities are overseen by one NextGen office.
FAA needs to create relevant performance measures that measure improvements resulting from the implementation of NextGen.
FAA needs to focus on delivering NextGen capabilities with near-term benefits.
FAA needs to reconsider how it oversees the industry and/or reduce its layers of oversight.
In addition to the individual named above, Catherine Colwell, Assistant Director; Amy Abramowitz; Sarah Arnett; William Colwell; Kevin Egan; Sam Hinojosa; David Hooper; Stuart Kaufman; Jennifer Kim; Josh Ormond; Amy Rosewarne; and Rebecca Rygg made key contributions to this report.
FAA Reauthorization Act: Progress and Challenges Implementing Various Provisions of the 2012 Act. GAO-14-285T. Washington, D.C.: February 5, 2014.
National Airspace System: Improved Budgeting Could Help FAA Better Determine Future Operations and Maintenance Priorities. GAO-13-693. Washington, D.C.: August 22, 2013.
NextGen Air Transportation System: FAA Has Made Some Progress in Midterm Implementation, but Ongoing Challenges Limit Expected Benefits. GAO-13-264. Washington, D.C.: April 8, 2013.
Next Generation Air Transportation System: FAA Faces Implementation Challenges. GAO-12-1011T. Washington, D.C.: September 12, 2012.
Air Traffic Control Modernization: Management Challenges Associated with Program Costs and Schedules Could Hinder NextGen Implementation. GAO-12-223. Washington, D.C.: February 16, 2012.
Next Generation Air Transportation: Collaborative Efforts with European Union Generally Mirror Effective Practices, but Near-term Challenges Could Delay Implementation. GAO-12-48. Washington, D.C.: November 3, 2011.
Next Generation Air Transportation System: FAA Has Made Some Progress in Implementation, but Delays Threaten to Impact Costs and Benefits. GAO-12-141T. Washington, D.C.: October 5, 2011.
NextGen Air Transportation System: Mechanisms for Collaboration and Technology Transfer Could Be Enhanced to More Fully Leverage Partner Agency and Industry Resources. GAO-11-604. Washington, D.C.: June 30, 2011.
Aviation Safety: Status of Recommendations to Improve FAA's Certification and Approval Processes. GAO-14-142T. Washington, D.C.: October 30, 2013.
FAA Facilities: Improved Condition Assessment Methods Could Better Inform Maintenance Decisions and Capital-Planning Efforts. GAO-13-757. Washington, D.C.: September 10, 2013.
Air Traffic Control: Characteristics and Performance of Selected International Air Navigation Service Providers and Lessons Learned from Their Commercialization. GAO-05-769. Washington, D.C.: July 29, 2005.
Over the past two decades, U.S. aviation stakeholders have debated whether the Federal Aviation Administration (FAA) should be the entity in the United States that operates and modernizes the air traffic control (ATC) system. During this period, GAO reported on challenges FAA has faced in operating and modernizing the ATC system. FAA reorganized several times in attempts to improve its performance and implement an initiative to modernize the ATC system, known as the Next Generation Air Transportation System (NextGen). Recent budgetary pressures have rekindled industry debate about FAA's efficiency in operating and modernizing the ATC system. GAO was asked to gather U.S. aviation industry stakeholder views on the operation and modernization of the current ATC system. This report provides perspectives from a wide range of stakeholders on (1) the performance of the ATC system and the NextGen modernization initiative and any challenges FAA may face in managing these activities and (2) potential changes that could improve the performance of the ATC system, including the NextGen modernization initiative. Based on GAO's knowledge and recommendations from interviewees, GAO interviewed a non-probability, non-generalizable sample of 76 U.S. aviation industry stakeholders—including airlines, airports, labor unions, manufacturers, and general aviation—using a semi-structured format with closed- and open-ended questions. GAO also discussed the perspectives with current FAA officials. The Department of Transportation provided technical comments on a draft of this product. The 76 aviation industry stakeholders with whom GAO spoke were generally positive regarding FAA's operation of the current ATC system but identified challenges in transitioning to NextGen. Specifically, the majority of stakeholders rated FAA as moderately to very capable of operating an efficient ATC system, but the majority also rated FAA as only marginally to moderately capable of implementing NextGen, FAA's initiative to modernize the system. Almost all (75) of the stakeholders identified challenges that they believe FAA faces, particularly in implementing NextGen initiatives. These challenges included difficulty in (1) convincing reluctant aircraft owners to invest in the aircraft technology necessary to benefit from NextGen (46 stakeholders) and (2) mitigating the effects of an uncertain fiscal environment (43 stakeholders). FAA officials acknowledged and generally agreed with these challenges. Sixty-four stakeholders suggested a range of changes they believe could improve the efficiency of ATC operations and NextGen's implementation. The change stakeholders suggested most often was to modify how FAA's ATC operations and NextGen programs are funded, including the need to ensure that FAA has a predictable and long-term funding source. Other suggested changes were to improve human capital activities, such as air traffic controllers' training, and to improve coordination with industry stakeholders. GAO has reported on these issues in the past and, in some cases, made recommendations with which FAA concurred but which it has not yet implemented. GAO also asked stakeholders whether separating ATC services from FAA, such as the privatization of the ATC service provider, was an option; 27 of the stakeholders believed it was an option, and another 26 believed it was an option but had significant reservations about such a change. Support for this option was mixed among categories of stakeholders (see table below). 
Stakeholders identified several issues that would need to be taken into account before making any changes to the provision of ATC services, including lessons learned from other countries, funding sources for such a system, and the extent of Congress's role in overseeing a separate ATC system. In the table, "Maybe" represents stakeholders who qualified their "Yes" responses with significant reservations, and the "Other" category combines three industry categories with fewer than four stakeholders: Research & Development Organizations, Other Federal Agencies, and Passenger and Safety Groups.
Postsecondary institutions that serve large proportions of economically disadvantaged and minority students are eligible to receive grants from the Department of Education (Education) through Title III and Title V of the Higher Education Act, as amended, to improve academic and program quality, expand educational opportunities, address institutional management issues, enhance institutional stability, and improve student services and outcomes. Institutions eligible for funding under Titles III and V include Historically Black Colleges and Universities (HBCUs), Tribal Colleges, Hispanic Serving Institutions (HSIs), Alaska Native and Native Hawaiian Institutions, and other undergraduate institutions of higher education that serve low-income students. While these institutions differ in terms of the racial and ethnic makeup of their students, they serve a disproportionate number of financially needy students and have limited financial resources, such as endowment funds, with which to serve them. (See app. I for characteristics of Title III and Title V institutions and their students.) Title III and Title V statutory provisions generally outline broad program goals for strengthening participating institutions, but provide grantees with flexibility in deciding which approaches will best meet their needs. An institution can use the grants to focus on one or more activities that will help it achieve the goals articulated in its comprehensive development plan—a plan that each applicant must submit with its grant application outlining its strategy for achieving growth and self-sufficiency. The statutory and regulatory eligibility criteria for all of the programs, with the exception of the HBCU program, contain requirements that institutions applying for grants serve a significant number of economically disadvantaged students. See table 1 for additional information about eligibility requirements. Historically, one of the primary missions of Title III has been to support Historically Black Colleges and Universities, which play a significant role in providing postsecondary opportunities for African American, low-income, and educationally disadvantaged students. These institutions receive funding, in part, to remedy past discriminatory action of the states and the federal government against black colleges and universities. For a number of years, all institutions that serve financially needy students—both minority serving and nonminority serving—competed for funding under the Strengthening Institutions Program, also under Title III. However, in 1998, the Higher Education Act was amended to create new grant programs specifically designated to provide financial support for Tribal Colleges, Alaska Native and Native Hawaiian Institutions, and Hispanic Serving Institutions. These programs have provided additional opportunities for Minority Serving Institutions to compete for federal grant funding. In 1999, the first year of funding for the expanded programs, 55 Hispanic Serving, Tribal, Alaska Native, and Native Hawaiian Institutions were awarded grants, and as of fiscal year 2006, 197 such institutions had new or continuation grants. (See table 2.) The grant programs are designed to increase the self-sufficiency and strengthen the capacity of eligible institutions. Congress has identified many areas in which institutions may use funds for improving their academic programs. 
Authorized uses include, but are not limited to, construction, maintenance, renovation or improvement of educational facilities; purchase or rental of certain kinds of equipment or services; support of faculty development; and purchase of library books, periodicals, and other educational materials. In their grant performance reports, the six grantees we recently reviewed most commonly reported using Title III and Title V grant funds to strengthen academic quality, improve support for students and student success, and improve institutional management, and they reported a range of benefits. To a lesser extent, grantees also reported using grant funds to improve their fiscal stability. However, our review of grant files found that institutions experienced challenges, such as staffing problems, which sometimes resulted in implementation delays. Efforts to Improve Academic Quality—Four of the six grantees we reviewed reported focusing at least one of their grant activities on improving academic quality. The goal of these efforts was to enhance faculty effectiveness in the classroom and to improve the learning environment for students. For example, Ilisagvik College, an Alaska Native Serving Institution, used part of its Title III, Part A Alaska Native and Native Hawaiian grant to provide instruction and student support services to prepare students for college-level math and English courses. According to the institution, many of its students come to college unprepared for math and English, and grant funds have helped the school to increase completion rates in these courses by 14 percentage points. Efforts to Improve Support for Students and Student Success—Four of the six grantees we reviewed reported focusing at least one of their grant activities on improving support for students and student success. This area includes, among other things, tutoring, counseling, and student service programs designed to improve academic success. Sinte Gleska, a tribal college in South Dakota, used part of its Title III grant to fund the school's distance learning department. Sinte Gleska reported that Title III has helped the school develop and extend its programs, particularly in the area of course delivery through technology. In addition, the school is able to offer its students access to academic and research resources otherwise not available in its rural, isolated location. Efforts to Improve Institutional Management—Four of the six grantees we reviewed reported focusing at least one of their grant activities on improving institutional management. Examples in this area include improving the technological infrastructure, constructing and renovating facilities, and establishing or enhancing management systems, among others. For example, Chaminade University, a Native Hawaiian Serving Institution, used part of its Title III grant to enhance the school's academic and administrative information system. According to Chaminade University, the new system allows students to access class lists and register online, and readily access their student financial accounts. Additionally, the Title III grant has helped provide students with the tools to explore course options and develop financial responsibility. Efforts to Improve Fiscal Stability at Grantee Institutions—Two of the six institutions we reviewed reported focusing at least one of their grant activities on improving their fiscal stability. 
Examples include activities such as establishing or enhancing a development office, establishing or improving an endowment fund, and increasing research dollars. Development officers at Concordia College, a historically black college in Alabama, reported using the college's Title III grant to raise the visibility of the college with potential donors. While grantees reported a range of uses and benefits, four of the six grantees also reported challenges in implementing their projects. For example, one grantee reported delays in implementing its management information system due to the turnover of experienced staff. Another grantee reported project delays because needed software was not delivered as scheduled. In addition, Education officials told us that common problems for grantees include delays in constructing facilities and hiring. As a result of these implementation challenges, grantees sometimes need additional time to complete planned activities. For example, 45 percent of the 49 grantees in the Title V Developing Hispanic Serving Institutions program that ended their 5-year grant period in September 2006 had an available balance greater than $1,000, ranging from less than 1 percent (about $2,500) to 16 percent (about $513,000) of the total grant. According to Education regulations, grantees generally have the option of extending the grant for 1 year after the 5-year grant cycle has ended to obligate remaining funds. Education has established a series of new objectives, strategies, and performance measures that are focused on key student outcomes for Title III and Title V programs. As part of Education's overall goal for higher education within its 2007-2012 Strategic Plan, Education established a supporting strategy to improve the academic, administrative, and fiscal stability of HBCUs, HSIs, and Tribal Colleges. Education has also established objectives in its annual program performance plans to maintain or increase student enrollment, persistence, and graduation rates at all Title III and Title V institutions, and has developed corresponding performance measures. When we reported on Education's strategic planning efforts in 2004, Education measured its progress in achieving objectives by measuring outputs, such as the percentage of grantees' institutional goals related to academic quality that were met or exceeded. However, these measures did not assess the programmatic impact of its efforts. Education's new objectives and performance measures are designed to be more outcome-focused. In addition, the targets for these new performance measures were established based on an assessment of Title III and Title V institutions' prior performance compared to performance at all institutions that participate in federal student financial assistance programs. Education officials told us that they made these changes, in part, to address concerns identified by the Office of Management and Budget that Education did not have specific long-term performance measures that focus on outcomes and meaningfully reflect the purpose of the program. Education needs to take additional steps to align some of its strategies and objectives and develop additional performance measures. GAO has previously reported that performance plans may be improved if strategies are linked to specific performance goals and the plans describe how the strategies will contribute to the achievement of those goals. 
We found insufficient links between strategies and objectives in Education's strategic plans and annual program performance plans. Specifically, Education needs to better link its strategies for improving administrative and fiscal stability and its objectives to increase or maintain enrollment, persistence, and graduation rates, because it is unclear how these strategies affect Education's chosen outcome measures. In fact, GAO and other federal agencies have previously found that Education faces challenges in measuring institutional progress in areas such as administrative and fiscal stability. To address part of this problem, Education is conducting a study of the financial health of low-income and minority serving institutions supported by Title III and Title V funds to determine, among other things, the major factors influencing financial health and whether the data Education collects on institutions can be used to measure fiscal stability. Education officials expect the study to be completed in 2008. Education made changes designed to better target monitoring and assistance in response to recommendations we made in our 2004 report; however, additional work is needed to ensure the effectiveness of these efforts. Specifically, we recommended that the Secretary of Education take steps to ensure that monitoring and technical assistance plans are carried out and targeted to at-risk grantees and that the needs of grantees guide the technical assistance offered. Education needed to take several actions to implement this recommendation, including completing its electronic monitoring tools and training programs to ensure that department staff are adequately prepared to monitor and assist grantees, and using appropriately collected feedback from grantees to target assistance. Education has taken steps to better target at-risk grantees, but more information is needed to determine the effectiveness of these efforts. In assessing risk, department staff are to use a variety of sources, including expenditure of grant funds, review of performance reports, and federally required audit reports. However, according to a 2007 report issued by Education's Office of Inspector General, program staff did not ensure grantees complied with federal audit reporting requirements. As a result, Education lacks assurance that grantees are appropriately managing federal funds, which increases the potential risk for waste, fraud, and abuse. In addition to reviewing grantee fiscal, performance, and compliance information, program staff are also required to consider a number of factors affecting the ability of grantees to manage their grants in the areas of project management and implementation, funds management, communication, and performance measurement. Education reports that identifying appropriate risk factors has been a continuous process and that these factors are still being refined. On the basis of the results of the risk assessments, program staff are to follow up with grantees to determine whether they are in need of further monitoring and assistance. Follow-up can take many forms, ranging from telephone calls and e-mails to on-site compliance visits and technical assistance if issues cannot be readily addressed. In targeting grantees at risk, Education officials told us that the department has recently changed its focus to improve the quality of monitoring while making the best use of limited resources. 
For example, Education officials said that risk criteria are being used to target those grantees most in need of site visits rather than requiring staff to conduct a minimum number each year. Based on information Education provided, program staff conducted site visits at 28 of the 517 institutions receiving Title III and Title V funding in fiscal year 2006, but a more extensive review is required to determine the nature and quality of these visits. Education's ability to effectively target monitoring and assistance to grantees may be hampered because of limitations in its electronic monitoring system, which are currently being addressed. Education implemented this system in December 2004, and all program staff were required to use the system as part of their daily monitoring activities. The system was designed to access funding information from existing systems, such as its automated payment system, as well as to access information from a departmental database that contains institutional performance reports. According to Education, further refinements to its electronic monitoring system are needed to systematically track and monitor grantees. For example, the current system does not allow users to identify risk by institution. Education also plans to automate and integrate the risk-based plan with its electronic monitoring system. Education anticipates the completion of system enhancements by the end of 2007. Because efforts are ongoing, Education has limited ability to systematically track grantee performance and fiscal information. Regarding training, Education reports that it has expanded course offerings for program staff specific to monitoring and assistance. Education officials told us that the department has only a few mandated courses, but noted that a number of training courses are offered, such as grants monitoring overview and budget review and analysis, to help program staff acquire needed skills for monitoring and assistance. However, because Education recently moved to a new training recordkeeping system that does not include information from prior systems, we were unable to determine the extent to which program staff participated in these offerings. We reported in 2004 that staff were unaware of the guidelines for monitoring grantees, and more information is needed to determine the extent to which new courses are meeting the needs of program staff. While Education provides technical assistance through program conferences, workshops, and routine interaction between program officers and grantees, Education's ability to target assistance remains limited, in that its feedback mechanisms may not encourage open communication. Education officials told us that they primarily rely on grantee feedback transmitted in annual performance reports and communication between program officers and grantees. As we reported in 2004, Education stated that it was considering ways to collect feedback separate from its reporting process for all its grant programs, but no such mechanisms have been developed. We previously recommended that the Secretary of Education take steps to ensure that monitoring and technical assistance plans are carried out and targeted to at-risk grantees and that the needs of grantees guide the technical assistance offered. These steps should include completing its automated monitoring tools and training programs to ensure that department staff are adequately prepared to monitor and assist grantees, and using appropriately collected feedback from grantees to target assistance. 
Education agreed with our recommendation and has taken actions to target its monitoring and technical assistance to at-risk grantees. However, additional study is needed to determine the effectiveness of these efforts. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Debra Prescott, Tranchau (Kris) Nguyen, Claudine Pauselli, Christopher Lyons, Carlo Salerno, Sheila McCoy, and Susan Bernstein. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Institutions that may receive funding under Titles III and V include Historically Black Colleges and Universities (HBCUs), Tribal Colleges, Hispanic Serving Institutions, Alaska Native Serving Institutions, Native Hawaiian Serving Institutions, and other postsecondary institutions that serve low-income students. In fiscal year 2006, these programs provided $448 million in funding for over 500 grantees, nearly double fiscal year 1999 funding of $230 million. GAO examined these programs to determine (1) how institutions used their Title III and Title V grants and the benefits they received from using these grant funds, (2) what objectives and strategies the Department of Education (Education) has developed for Title III and Title V programs, and (3) to what extent Education monitors and provides assistance to these institutions. This testimony updates a September 2004 report on these programs (GAO-04-961). To update our work, GAO reviewed Education policy and planning documents, and program materials and grantee performance reports; interviewed Education officials; and analyzed Education data on grantee characteristics. In their performance reports, the six grantees we reviewed most commonly reported using Title III and Title V grant funds to strengthen academic quality, improve support for students and student success, and improve institutional management, and they reported a wide range of benefits. For example, Sinte Gleska, a tribal college in South Dakota, used part of its Title III grant to fund the school's distance learning department, to provide students access to academic and research resources otherwise not available in its rural, isolated location. Our review of grant files found that institutions experienced challenges, such as staffing problems, which sometimes resulted in implementation delays. For example, one grantee reported delays in implementing its management information system due to the turnover of experienced staff. As a result of these implementation challenges, grantees sometimes need additional time to complete planned activities. Although Education has established outcome-based objectives and performance measures, it needs to take steps to align some strategies and objectives and develop additional performance measures. Education has established an overall strategy to improve the academic, administrative, and fiscal stability of grantees, along with objectives and performance measures focused on student outcomes, such as graduation rates. In 2004, we reported that Education's strategic planning efforts were focused on program outputs, such as the percentage of goals that grantees met or exceeded, rather than on outcomes, and did not assess programmatic impacts. While Education has made progress in developing outcome-based measures, we found insufficient links between its strategies for improving administrative and fiscal stability and its student outcome objectives. To address challenges in measuring institutional progress in areas such as administrative and fiscal stability, Education is conducting a study of the financial health of low-income and minority serving institutions supported by Title III and Title V. Education has made changes to better target monitoring and assistance in response to recommendations GAO made in 2004; however, additional study is needed to determine the effectiveness of these efforts. For example, Education uses risk indicators designed to better target grantees that may require site visits. 
While Education has implemented an electronic monitoring system, the system does not yet systematically track grantee performance as designed. While Education provides technical assistance through various methods, its ability to target assistance remains limited in that its feedback mechanisms may not encourage open communication. Specifically, Education relies on grantee performance reports that are tied to funding decisions to solicit feedback.
The National Capital Revitalization and Self-Government Improvement Act, Public Law 105-33 (the Revitalization Act), approved August 5, 1997, directed the Authority and the District of Columbia government to develop and implement management reform plans for nine major city agencies and four citywide functions during fiscal years 1998 and 1999. Funding for management reform was to be provided, for the most part, by existing budget authority within agencies and by the fiscal year 1998 surplus that resulted from the federal government's assumption of the cost of certain functions previously financed through District revenue. The law gave the Authority the power to allocate surplus funds to management reform projects. The Authority reported in the Fiscal Year 1998 Annual Performance Report: A Report on Service Improvements and Management Reform, dated October 30, 1998, that the projects were selected using management reform criteria of customer satisfaction; empowering employees; long-term service delivery improvements; and greater internal capacity (through infrastructure changes, staff training, and automation). In September 1997, the Authority hired 11 consultants, at a cost of $6.6 million, to develop management reform plans for these agencies and functions. The District's management reform team, consisting of the Chairman of the Authority, the former Mayor, the Chairman of the City Council, and the heads of each agency, approved the projects for implementation. The Authority then hired a Chief Management Officer (CMO) who was delegated responsibility for these projects. The CMO implemented a system to manage these projects that included development of operational plans, identification of the official directly responsible for each project, and periodic monitoring of each project. Agencies were required to report monthly on expenditures and the results of the projects. On October 30, 1998, the Authority reported in its Fiscal Year 1998 Annual Performance Report that 69 projects had been completed. In January 1999, the Authority returned responsibility for the nine city agencies and four citywide functions to the newly elected Mayor. District officials told us that the current administration established a new reform agenda that incorporated a small number of the remaining management reform projects. Specifically, the District selected 20 of the remaining 200 projects that it considered to be the best projects to be continued in fiscal year 1999. The District also initiated 7 new projects, for a total of 27 projects that were funded in fiscal year 1999. To determine the status and results of the District's management reform initiatives, we reviewed pertinent financial documents and reports provided by the Office of the Chief Financial Officer (OCFO), the Authority, the office of the former CMO, and the D.C. City Council. We also interviewed the Deputy Mayor for Operations, the Chief Financial Officer, and other officials from those offices and the Authority. We did not audit the District's management reform funds or expenditures, and accordingly, we do not express an opinion or any other form of assurance on these reported amounts. Our work was done in accordance with generally accepted government auditing standards between April and June 2000. I will now discuss in more detail the matters I highlighted earlier. Authority and District officials have not consistently tracked the disposition of management reform initiatives from fiscal years 1998 and 1999. 
These officials were unable to provide adequate information on whether these management reform projects from fiscal years 1998 and 1999 achieved their intended goals or objectives. Although this information may be available on an agency-by-agency basis, currently, the District has no systematic process for monitoring and reporting on this information. During fiscal years 1998 and 1999, the District budgeted over $300 million to begin implementing over 250 management reform projects. The reported fiscal year 1998 investment in management reform of about $293 million included $112.6 million of operating funds and $180.3 million of capital funds. For fiscal year 1999, the investment in management reform of $36.2 million included $30.9 million of operating funds and $5.3 million of capital funds. Of the $36.2 million, about $33 million was federal appropriations provided to the Authority specifically for management reform. Table 1 shows the total funds provided to the District for management reform for fiscal years 1998 through 2000, the amounts reported as obligated, estimated savings from those initiatives, and reported savings from those initiatives. The status of the funds appropriated for the management reform projects initially identified in fiscal year 1998 and the disposition of those projects are as follows: The District reported in its Final Fiscal Year 1998 Management Reform Summary of Operating and Capital Funds, as of September 30, 1998, that of the $292.8 million budgeted for management reform, approximately $126.9 million had been spent and about $165.9 million was available at the end of fiscal year 1998. Of this amount, approximately $2.3 million of operating funds lapsed, resulting in about $163.6 million remaining at the end of fiscal year 1998. About $3.2 million of operating funds (included in the $163.6 million above), which was not allocated to any particular project, was carried over to fiscal year 1999 for management reform projects in accordance with the District of Columbia Appropriations Act of 1999, Public Law 105-277. The remainder was allocated to 35 former management reform initiatives that were designated as capital projects and were no longer considered part of the management reform program. According to the District’s Expenditure Data on Capital Projects report, the $160.4 million in capital funds (included in the $163.6 million previously mentioned) unspent at the end of fiscal year 1998 was carried over into fiscal year 1999 for the 35 projects. Included in the 35 projects were initiatives for the Automated Integrated Tax System, implementing the Real Property Inventory System, and implementing a new Motor Vehicle Information System. According to the Authority, 69 projects had been completed. Included in the completed projects were the modification of the Department of Corrections Employee Pay Plan and an increase in the number of building inspections. Although the District’s Final Fiscal Year 1998 Management Reform Summary reported fiscal year spending on these management reform projects totaling about $127 million, the District could not specifically identify the amount of funds spent that was used to pay consultants, contractors, and District employees. According to District officials, the former CMO requested information regarding funds spent for consultants and contractors from the agencies during fiscal year 1998. This information was reported to the OCFO on a monthly basis. 
However, we found that the data was inconsistent, and no such information related to these management reform projects was requested in fiscal year 1999. The District, however, has acknowledged that management reform funds were used for projects other than management reform; for example, about $11.3 million was used for the pay increase for District of Columbia Public School teachers. District officials told us that the new administration of Mayor Williams inherited approximately 200 projects in various stages of completion. Rather than continue with the entire agenda, the new administration reviewed the projects and selected those it considered to be the best projects for incorporation into agencies’ long-term plans. In consultations with the Authority, the new administration chose the 20 best projects and added 7 new projects, giving it a total of 27 projects funded in fiscal year 1999. To implement these 27 management reform projects during fiscal year 1999, the District budgeted approximately $36.2 million, $33 million of which was federal appropriations. Twenty-six of these projects were funded with $30.9 million in operating funds and one project received about $5.3 million in capital funds. The District reported in its Fiscal Year 1999 Management and Regulatory Reform Funds, Agency Expenditure Summary as of May 15, 2000, that of the $36.2 million budgeted, approximately $29.1 million had been spent and about $7.1 million lapsed at the end of fiscal year 1999. As of June 16, 2000, the District had not determined the status of the 27 management reform projects for fiscal year 1999. In February 2000, the Office of the Deputy Mayor for Operations asked District agencies responsible for the projects to provide information on the original project goals and the results that had been achieved. According to District officials, they obtained information on only a few projects. The Deputy Mayor for Operations told us that he expected to have the status of each project by mid-June. As of June 26, 2000, we had not received this information. Included in the fiscal year 1999 budget was a line item that indicated that management reform initiatives would save approximately $10 million. District officials told us that these estimated savings were based on assumptions by the former CMO that an investment of about $93 million in operating funds would yield permanent cost savings. The estimated savings by agency were not defined in the fiscal year 1999 Appropriations Act; therefore, District officials determined the allocated savings based on the amount of each agency’s management reform investment. As of June 1, 2000, of the $10 million expected in savings, District officials reported that about 15 percent, or $1.5 million, had been realized. The fiscal year 2000 budget also included $41 million of projected savings from various initiatives, including management reform productivity savings. However, in discussions with us, District officials said that the management reform productivity savings and other savings included in the fiscal year 2000 budget are not likely to be realized. The $41 million in projected savings was comprised of the following: $7 million in management reform productivity savings; $14 million in savings resulting from the implementation of the District of Columbia Supply Schedule; and $20 million in productivity bank savings. The District does not know whether any savings will be realized from the $7 million of management reform productivity savings. 
District officials told us that the former CMO and the Authority set the goal of $7 million; however, no one identified the savings targets related to specific management reform initiatives prior to the formulation of the fiscal year 2000 budget. The District does not expect any savings in fiscal year 2000 from the $14 million, which was to be derived from the District’s establishment of a District Supply Schedule. District officials told us that the new Chief Procurement Officer had reviewed the D.C. Supply Schedule initiative in the summer of 1999 and determined that it did not offer advantages beyond existing federal schedules that District agencies were already utilizing. The District expects no savings from the $20 million productivity bank project, nor are these savings directly related to any management reform initiatives. The timing of congressional approval of the federal budget resulted in productivity bank funds not being available to agencies until the second quarter of fiscal year 2000. According to District officials, the timing of the budget approval, combined with the same-year repayment requirement, has discouraged agencies from taking advantage of this fund, as productivity savings are often realized in small amounts within the first year and in increasing amounts in subsequent years. Originally proposed by the previous Mayor’s 1996 plan, A Transformed Government of the People of Washington, D.C., this group of initiatives was included as an appendix to the fiscal year 1998 budget. The projects, which ranged from reducing the number of District employees to streamlining services to promote economic development, were estimated to save about $152.4 million. According to District and Authority officials, many of the initiatives listed in the plan have not been implemented and no savings have been realized. In many instances, the initiatives have been overtaken by other events, such as the National Capital Revitalization and Self-Government Improvement Act of 1997. Because so few of the initiatives have been implemented, District officials told us that information is not available to determine the net benefit to the District either in terms of dollars saved or improved efficiencies and effectiveness of District services. Since fiscal year 1998, the District government has budgeted over $300 million to implement management reform initiatives or projects. During this same period, District budgets have stated that management reform initiatives and other cost-saving initiatives would save about $200 million. To date, only $1.5 million of management reform savings have been documented. Additional savings might have been realized, but the Authority and District officials had not systematically assessed project results and savings. In addition, they did not adequately track the costs of these projects and, as a result, sufficient information is not available to show how these funds were spent. These management reform projects and targeted savings have been an integral part of recent District budgets and identify important reforms needed to improve services. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or Members of the Subcommittee may have. For further information regarding this testimony, please contact Gloria L. Jarmon at (202) 512-4476 or by e-mail at [email protected]. Individuals making key contributions to this testimony included Norma Samuel, Linda Elmore, Timothy Murray, and Bronwyn Hughes. 
Chronology of key events related to the closeout of the District’s management reform initiatives:
- The President signed into law the District of Columbia FY 2000 Appropriations Act, P. L. 106-113. The act directed the CFO of the District to make reductions of $7 million for management reform savings in local funds to one or more of the appropriation headings in the act.
- The Authority notified the Deputy Mayor for Operations that it was conducting a closeout review of the results of the management reform initiatives and requested information about each of the initiatives through calendar year 1999.
- The Deputy Mayor for Operations distributed a survey to the agencies requesting the results of the fiscal year 1999 management reform initiatives.
- The Deputy Mayor for Operations provided the Authority a partial response to its January 14, 2000, request.
- The Authority notified the Deputy Mayor for Operations of its intent to finalize its closeout review of the results of the fiscal year 1999 management reform initiatives. To track the results of the initiatives, the Authority asked the Deputy Mayor for Operations to submit survey responses from each agency.
- The District’s proposed Fiscal Year 2001 Operating Budget and Financial Plan includes an estimated $37 million in management reform productivity savings.
- The Deputy Mayor for Operations provided GAO with a draft project status report of the fiscal year 1999 management reform operating projects as of September 30, 1999.
(916354)
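The fiscal year 1998 fund disposition reported earlier can be checked with a short reconciliation. The sketch below simply restates the rounded figures from the District’s Final Fiscal Year 1998 Management Reform Summary as they appear in this statement; the variable names are illustrative.

```python
# Rough reconciliation of the fiscal year 1998 management reform funds
# (all figures in millions of dollars, rounded as reported above).
budgeted = 292.8                # total budgeted for management reform
spent = 126.9                   # reported spending during fiscal year 1998
lapsed_operating = 2.3          # operating funds that lapsed
carryover_operating = 3.2       # unallocated operating funds carried into FY 1999
carryover_capital = 160.4       # capital funds carried into FY 1999 for 35 projects

available_at_year_end = budgeted - spent                          # about 165.9
remaining_after_lapse = available_at_year_end - lapsed_operating  # about 163.6
carried_forward = carryover_operating + carryover_capital         # 163.6

print(f"Available at year end: {available_at_year_end:6.1f}")
print(f"Remaining after lapse: {remaining_after_lapse:6.1f}")
print(f"Carried into FY 1999:  {carried_forward:6.1f}")
```

The rounded amounts tie out: the $3.2 million operating carryover and the $160.4 million capital carryover together account for the $163.6 million remaining after the lapse.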
Pursuant to a congressional request, GAO discussed the District of Columbia's management reform initiatives. GAO noted that: (1) over the past 3 fiscal years, the District government has proposed hundreds of management reform initiatives that were estimated to save millions of dollars as well as improve government services; (2) however, as of June 1, 2000, the District had only reported savings of about $1.5 million related to these initiatives and had not consistently tracked the status of these projects; (3) neither the District of Columbia Financial Responsibility and Management Assistance Authority nor the District could provide adequate details on the goals achieved for all of the projects that had been reported as completed or in various stages of completion; (4) the District does not have a systematic process to monitor these management reform projects and determine where savings or customer service improvements have been realized; and (5) the District cannot say for certain how funds designated for management reform have been spent or whether the key goals of these initiatives have been realized.
Advances in the use of information technology and the Internet are transforming the way federal agencies communicate, use information, deliver services, and conduct business. To increase the ability of citizens to interact with the federal government electronically, in 1998 the Congress enacted GPEA. GPEA makes OMB responsible for ensuring that federal agencies meet the act’s October 21, 2003, implementation deadline. In May 2000, OMB issued GPEA implementation guidance, which lays out a process and principles for agencies to employ in evaluating the use and acceptance of electronic documents and signatures. The guidance calls for agencies to examine business processes that might be revamped to employ electronic documents, forms, or transactions; identify customer needs and demands; consider the costs, benefits, and risks associated with making the transition to electronic environments; and develop plans and strategies for recordkeeping and security. In September 2000, we concluded that OMB’s GPEA guidance—as well as the guidance and supplementary efforts being undertaken by Treasury, the National Archives and Records Administration, the Departments of Justice and Commerce, and others—provided a useful foundation of information to assist agencies with GPEA implementation and the transition to electronic government (e-government). Our report also laid out information technology management challenges that are fundamental to the success of GPEA. OMB’s May guidance also required each agency, by October 2000, to develop and submit a GPEA implementation plan and schedule. According to this guidance, these plans were to prioritize implementation of systems and system modules based on achievability and net benefit. Further, agencies were required to coordinate their GPEA plans and schedules with their strategic information technology (IT) planning activities and report progress annually. In July 2000, OMB issued supplemental guidance that provided a structured, standardized format for agency reporting of GPEA implementation plans. Unlike the May 2000 guidance, which discussed a wide range of activities needed for an agency to comply with GPEA, this new guidance focused on specific kinds of data that OMB was expecting agencies to submit in the October 2000 plans. The new guidance specified that the plans be divided into four parts: First, agencies were to provide a cover letter describing their overall strategy and actions to comply with the act. This letter is the part of the plan that provides an agencywide perspective on GPEA compliance efforts. Second, agencies were required to provide data in tabular form regarding information-collection activities approved by OMB under the Paperwork Reduction Act (PRA), which mandates that OMB review how agencies collect and use information. The data tables were to include a column showing when an electronic option would be completed (if one was being planned) and whether electronic signatures were to be used. Third, agencies were requested to provide an additional table showing interagency reporting, information-dissemination activities, and other agency-identified transactions. According to OMB’s guidance, “interagency reporting” encompasses ongoing, periodic reports, such as personnel and payroll reports, which are exchanged among agencies. “Information-dissemination activities” refers to information products intended for the general public, such as the periodic release of labor statistics. 
Like the PRA-based inventory, this list was to include a column showing when an electronic option would be completed, if planned, and whether electronic signatures were to be used. Lastly, supplemental information was also to be provided about any of the previously listed transactions that the agency had determined to pose a “high risk,” such as those involving particularly sensitive information or very large numbers of respondents. This section of the plan was to include a description of the transactions, their sensitivity, and additional risk management measures that would be taken. Let me now turn to the three agency plans you asked us to review. According to Treasury’s plan, the department’s GPEA-related activities are a critical component of the overall departmental effort to fundamentally redefine the way it performs its critical missions. According to the plan, a key element of that effort was the development of an e-government strategic plan—just published this month—which Treasury is using as a framework for selecting and implementing electronic initiatives. In addition to its internal initiatives, Treasury’s plan notes that the department has been involved in governmentwide actions to advance electronic government and comply with GPEA. A key example is Pay.gov, an Internet portal developed by its Financial Management Service. According to the plan, the services of Pay.gov can help agencies meet GPEA requirements to accept forms electronically by 2003 by offering a package of electronic financial services to assist agencies, such as enabling end-users to submit agency forms and authorize payments, presenting agency bills to end-users, and establishing the identity of end-users and reporting information about transactions back to the agencies. Once fully operational, this service could help agencies throughout the federal government to more easily reach the goals of GPEA. According to the department’s deputy chief information officer (CIO), the progress of major GPEA-related initiatives at Treasury is being monitored through monthly CIO meetings with representatives from each of the department’s various bureaus and by using an investment management tool. The Deputy CIO added that compliance with GPEA is also included in the criteria that Treasury uses in its investment review process for evaluating newly proposed information technology projects. Treasury used its database of information collections identified under PRA as a starting point for preparing the required data tables for its GPEA implementation plan. PRA information collections include such things as requests for forms and publications, tax-related forms, and business-production reports. To refine the list, the department’s CIO organization convened a group comprising representatives from Treasury’s IT policy and strategy group, the CIO development team, the bureaus, and the policy office. The group reviewed the PRA collections and added a records management initiative that had not been part of the original database. Treasury’s plan provides the kind of information stipulated in OMB’s July 2000 guidance. Altogether, Treasury identified 336 PRA information-collection processes that are subject to GPEA. According to the plan, 23 of these are scheduled for conversion to an electronic option in 2001, 36 are scheduled for 2002, and 84 are scheduled for 2003. 
Of the remaining initiatives, 80 were reported as already converted, two are scheduled for conversion in 2004, and 111 were not assigned a completion date for conversion. In all but one case where the conversion date was beyond October 2003 or not assigned, Treasury included explanations, as required by OMB’s guidance. Further, Treasury identified 105 initiatives offering an electronic option for interagency reporting, information-dissemination activities, and other transactions, and four transactions identified as high risk. For those initiatives included in Treasury’s plan that did not specify completion dates, the department plans to include that information when it becomes available, according to the deputy CIO. The plan also is expected to be updated as the bureaus and department offices make progress toward completing the initiatives. According to its October 2000 plan, EPA is currently undertaking three major activities in an effort to provide e-government services and comply with GPEA. The first initiative is to establish a new rule that would permit electronic reporting and recordkeeping and establish the requirements necessary to ensure that electronic documents are valid and authentic. EPA has drafted the proposed new rule, and it is currently being reviewed by administration officials. Agency officials expect it to be approved this year, with a final rule to be published in 2002. The second major initiative is the development of a computer network facility known as the Central Data Exchange. This new facility is to be the central point of entry for all electronic reporting, and is expected to provide security, authentication, error detection, and distribution capabilities. EPA expects the facility to be fully operational by the fall of 2002. The third major initiative is to improve EPA’s information security. We have previously reported on significant weaknesses in EPA’s information security program. The October 2000 plan states that the agency has made significant progress in improving its cyber defenses by implementing security confidentiality protocols and procedures. Further, agency officials state that they are actively exploring the use of electronic signatures and public key infrastructure (PKI) technology to ensure the security, confidentiality, and non-repudiation of sensitive data collections. EPA used an iterative process to develop its October 2000 plan. Starting with its internal PRA database as a baseline, Office of Environmental Information personnel created a template of information collections that was sent to each program office for validation and for completion of additional GPEA-related data. The agency’s final plan contains a detailed inventory of its PRA information collections. An EPA official said that this inventory and its related attachments include all of the information regarding plans for electronic interagency reporting, information-dissemination activities, and high-risk transactions, as required by OMB. EPA identified 279 data-collection activities applicable to GPEA. Through iterative reviews, it determined that 108 of these were not candidates for electronic reporting because, for example, they involved interaction with only a few members of the public or because filling out a paper form was not deemed a significant burden. 
According to the agency’s plan, of the 171 data collections that were considered suitable for electronic reporting, 21 have already been converted, 3 are scheduled for 2001, 13 are scheduled for 2002, and 96 are scheduled for 2003. The remaining 38 data collections that will not be ready for electronic reporting by the GPEA deadline all involve the reporting of confidential business information. The electronic transmission of this type of data poses additional risks that EPA does not plan to have fully addressed by October 2003. Agency officials state that they are in the process of assessing these data collections to determine how to collect these data centrally and in a secure form. By 2003 they expect that they will be testing methods of secure transmission but do not expect them to be operational until after the GPEA deadline. According to EPA officials, in anticipation of a request by OMB for updated information on the data-collection inventories, they sent a letter to the program offices asking for such updated information. Using these responses, EPA officials plan to update their data-collection inventory. DOD’s October 2000 GPEA plan does not include a description of the department’s overall strategy and efforts to comply with GPEA. Likewise, DOD officials could not provide us with documentation specifically addressing a departmentwide implementation strategy. Officials from DOD’s Office of the CIO told us that major GPEA-related activities within the department are focused on enabling and enhancing electronic business applications and that the department’s strategic plans for business process transformation include objectives that incidentally address the goals of GPEA. Examples include the department’s paperless contracting project—which aims to achieve paperless processes for many aspects of contracting and invoicing—and its Central Contractor Registration System, which contains electronic information about contractors and vendors. The bulk of DOD’s departmentwide activity is focused on developing a PKI to control access to sensitive information and provide security for electronic transactions via digital signatures. To assemble the department’s plan, officials from the CIO’s office began by providing the military services and other departmental components with listings of their information collections reported under PRA and requested that they provide GPEA information for those items and add any others that might be appropriate. The services and components, in turn, relayed the data requests to their sub-components until a level was reached that could provide information about the specific collections. The data were then reported back up to the office of the CIO, where they were consolidated into a single report for OMB. The data tables provided in DOD’s plan generally conform to the format specified in OMB’s July 2000 guidance. The tables indicate that DOD conducted 449 information-collection activities meeting OMB’s reporting requirements for PRA. They also identify 13 interagency reporting and information-dissemination activities, as well as four transactions that were determined to pose a high risk. The Office of the CIO did not review the data it received from the various DOD components for completeness or accuracy before reporting the information to OMB in October 2000. In reviewing the data, we found indications that some may be inaccurate, incomplete, or duplicative. 
For example, the Defense Security Service made 238 entries for data-collection activities that included little of the information requested by OMB and appeared, in many cases, not appropriate as separate entries. In discussions with us, DOD officials agreed that the Defense Security Service had reported incomplete and possibly inaccurate information and said that they would request that the service correct it. The Office of the CIO has taken steps to follow up on the information submitted by the military services and DOD components. In January 2001, the CIO issued a memorandum to the services and components forwarding OMB’s May 2000 guidance on GPEA implementation. The memo stated that CIOs of the DOD components would be expected to apply it during their continued planning, development, redesign, operation, and oversight of department systems. According to CIO officials, this memo is the first formal DOD guidance document specifically addressing GPEA. Further, in April, the DOD CIO office requested that the services and components review the accuracy of their portions of the GPEA implementation plan. However, DOD CIO officials indicated that only one official—from the Office of the Assistant Secretary of Defense (Public Affairs)—had responded to this information request, and that was to correct possible errors for a single item. Mr. Chairman, you also asked us to assess the Personnel and Readiness portion of DOD’s plan. For this category, DOD reported 76 PRA information-collection activities and ten interagency reporting and information-dissemination activities. DOD provided a projected completion date for one of the 76 PRA-type activities and for two of the ten interagency and information-dissemination activities. Additionally, we found that 38 of the 76 PRA information collections and four of the ten interagency reporting and information-dissemination activities were likely duplicate entries. We met with officials from the Office of the CIO and the Undersecretary of Defense for Personnel and Readiness and pointed out the potential duplication. The officials agreed and subsequently notified us that Personnel and Readiness had corrected the discrepancies. In our discussions with agency officials, several themes emerged as significant challenges in meeting the goals of GPEA. First, all three agencies have determined that the security assurances provided through the use of PKI technology will be needed to enable many of their sensitive electronic transactions. As I mentioned earlier, DOD’s Office of the CIO is developing a departmentwide PKI, and the office is working with the General Services Administration (GSA) to make its PKI interoperable with GSA’s governmentwide Access Certificates for Electronic Services program. EPA is also pilot-testing the use of electronic signatures and digital certificates through GSA’s program, and has applied for a grant from GSA to conduct a PKI interoperability project. Treasury is also closely involved in the governmentwide effort to develop PKI, having recently chaired the CIO Council’s Federal PKI Steering Committee. According to Treasury’s deputy CIO, the department will be challenged to develop its own PKI because it will need to pool resources from, and coordinate activities with, all of its bureaus. 
Second, EPA and Treasury both commented about the importance of adequately planning for and implementing computer network and telecommunications infrastructures to provide the capacity and connectivity needed to support the electronic traffic generated by new or enhanced electronic offerings. According to agency officials, many types of transactions covered by GPEA will require the support of new enterprisewide infrastructure. For example, EPA’s Central Data Exchange project is a major infrastructure undertaking that will be critical to enabling the electronic exchange of information between EPA and state environmental agencies. Likewise, Treasury is developing the Treasury Communications Enterprise to provide a common departmentwide communications infrastructure to support electronic government initiatives throughout the department. Third, agencies will need adequate capabilities for storing, retrieving, and disposing of electronic records. EPA officials expressed concern about the status of governmentwide electronic recordkeeping standards, which have not yet been finalized. Many electronic systems are already being developed and implemented that may be incompatible with future standards. As we reported last September, federal agencies face additional information management challenges that are also fundamental to the success of GPEA. Specifically, agencies will need to use disciplined investment management practices to ensure that the full costs of providing electronic filing, recordkeeping, and transactions prompted by GPEA are identified and examined within the context of expected benefits; and ensure that IT human capital needs are addressed so that staff can effectively operate and maintain new e-government systems, adequately oversee related contractor support, and deliver responsive service to the public. OMB will also be challenged in its oversight role of ensuring that agencies comply with GPEA. As I mentioned, OMB’s initial guidance issued in May 2000 prescribed policies and procedures for agencies to follow in implementing the act. For example, the guidance states that agencies should prioritize GPEA implementation based on achievability and net benefit. A number of the prescribed procedures were focused on agencywide strategic actions, such as examining business processes that might be revamped to employ electronic documents, forms, or transactions; identifying customer needs and demands as well as the existing risks associated with fraud, error, or misuse; and evaluating electronic signature alternatives, including risks, costs, and practicality. However, the GPEA implementation plans submitted by federal agencies do not provide sufficient information with which to assess whether agencies have been engaging in these processes. While OMB’s subsequent July reporting guidance called for a brief cover letter describing an agency’s overall strategy and actions to comply with the act, it did not stipulate a full report on the variety of strategic activities and other tasks that agencies were expected to perform, and their schedules for carrying them out. Further, the format prescribed for the information-collection data tables does not provide for any indication of whether electronic implementation has been prioritized based on achievability and net benefit. OMB may wish to consider whether a more comprehensive agency status report is necessary in order to gain better insight into agencywide GPEA planning. 
Specifically, agencies could be asked to report on the status of the specific tasks outlined in OMB’s May 2000 guidance, and provide milestones for completing tasks that are still underway. This would allow OMB to better assess whether individual agencies are likely to achieve the objectives of the act. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. For information about this testimony, please contact me at (202) 512-6408 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Felipe Colón, Jr., John de Ferrari, Steven Law, Juan Reyes, Elizabeth Roach, Jamelyn Smith, and Yvonne Vigil. (310422)
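To illustrate the kind of tabular data OMB’s July 2000 guidance called for, the sketch below represents each information collection as a simple record and tallies planned conversion years. The field names are illustrative assumptions rather than OMB’s actual column headings, and Treasury’s reported counts are used only as example data.

```python
from dataclasses import dataclass
from collections import Counter
from typing import Optional

@dataclass
class CollectionEntry:
    """One information-collection activity in a GPEA implementation plan.

    The field names are illustrative; OMB's July 2000 guidance asked for an
    electronic-option completion date and an electronic-signature indicator
    for each collection, plus similar tables for interagency reporting and
    information-dissemination activities.
    """
    title: str
    electronic_option_year: Optional[int]   # None if no completion date assigned
    uses_electronic_signature: bool
    high_risk: bool = False

def tally_by_year(entries: list[CollectionEntry]) -> Counter:
    """Count entries by planned conversion year ('none' if unassigned)."""
    return Counter(e.electronic_option_year or "none" for e in entries)

# Example data only: Treasury reported 336 PRA collections, with 80 already
# converted, 23 planned for 2001, 36 for 2002, 84 for 2003, 2 for 2004, and
# 111 with no completion date assigned.
treasury_counts = {"converted": 80, 2001: 23, 2002: 36, 2003: 84, 2004: 2, "none": 111}
assert sum(treasury_counts.values()) == 336  # the reported figures tie out
```

EPA’s reported figures tie out the same way: the 108 collections judged not to be candidates and the 171 scheduled or deferred collections (21 converted, 3 in 2001, 13 in 2002, 96 in 2003, and 38 deferred past the deadline) sum to the 279 identified.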
The Government Paperwork Elimination Act (GPEA) requires that by 2003 federal agencies provide the public the option of submitting, maintaining, and disclosing required information--such as employment records, tax forms, and loan applications--electronically, instead of on paper. In October 2000, federal agencies submitted GPEA implementation plans to the Office of Management and Budget (OMB), which is responsible for executive branch oversight of GPEA. The plans submitted by the Department of the Treasury and the Environmental Protection Agency (EPA) generally provide the kind of information that was specified in OMB's July 2000 guidance. However, the Department of Defense's (DOD) plan did not describe the department's overall GPEA strategy and, in some cases, the data provided for specific information collections may be inaccurate, incomplete, or duplicative. Officials at all three agencies said that they faced challenges in complying with GPEA, particularly with regard to implementing adequate security assurances for sensitive electronic transactions and in planning for and implementing computer network infrastructures. Furthermore, OMB will be challenged in providing oversight of agency GPEA activities because the plans submitted by the agencies do not document key strategic actions, nor do they specify when they will be undertaken. Taken in isolation, the plans do not provide enough information to assess agencies' progress in meeting the objectives of the act. OMB may wish to require agencies to report on major agencywide activities, including specific planned tasks and milestones and the rationale for adopting them.
Since 1824 the Corps has been responsible for constructing and maintaining a safe, reliable, and economically efficient navigation system. Today, this system is comprised of more than 12,000 miles of inland waterways, 300 large commercial harbors, and 600 small harbors. From fiscal years 1998 through 2002, the Corps has removed an average of about 265 million cubic yards of material each year from the navigable waters of the United States, at an average annual cost of about $856 million (in constant 2002 dollars). Private industry performs most of the overall dredging, except for the work done by hopper dredges, in which both the Corps and industry perform a significant amount of the work. Of the $856 million spent annually on overall dredging, about $197 million is spent on all hopper dredging (both maintenance and new construction), with industry vessels accounting for about $148 million annually and Corps vessels accounting for about $49 million. Each of the Corps’ hopper dredges typically operates in a specific geographic area. The Wheeler, a large-class dredge, usually operates in the Gulf of Mexico. The McFarland, a medium-class dredge, usually operates in the Atlantic and Gulf of Mexico. The Essayons, a large-class dredge, and the Yaquina, a small-class dredge, typically work along the Pacific coast. Legislation enacted in the 1990s sought to further increase the role of industry in hopper dredging by placing operational restrictions on the Corps’ hopper dredges. Specifically, the Energy and Water Development Appropriations Act for fiscal year 1993 and subsequent appropriations acts required the Corps to offer for competitive bidding 7.5 million cubic yards of hopper dredging work previously performed by the federal fleet. Since fiscal year 1993, the Corps has addressed this requirement by reducing the use of each of its four dredges from about 230 workdays per year to about 180 workdays per year. The Water Resources Development Act for fiscal year 1996 required the Corps to initiate a program to increase the use of private hopper dredges principally by taking the Wheeler out of active status and placing it into ready reserve. The Corps implemented this requirement by allowing the Wheeler to work 55 days a year plus emergencies (which includes urgent and time-sensitive dredging needs). The 1996 act did not alter the Corps’ duty to implement the dredging program in the manner most economical and advantageous to the United States, and it restricted the Corps’ authority to reduce the workload of other federal hopper dredges. The conference report that accompanied the act directed the Corps to periodically evaluate the effects of the ready reserve program on private industry and on the Corps’ hopper dredge costs, responsiveness, and capacity. The Energy and Water Appropriations Act for fiscal year 2002 placed another restriction on the use of the Corps’ dredge McFarland, limiting it to emergency work and its historical scheduled maintenance in the Delaware River (about 85 workdays per year). Taken together, these restrictions have increased private industry’s share of the hopper dredging workload. In theory, restrictions on the use of the Corps’ hopper dredges could generate efficiency and cost-savings benefits to both government and industry. For example, restricting the Corps’ hopper dredges to fewer scheduled workdays could make them more available to respond to emergency dredging needs. 
In addition, the increase in demand for dredging by private industry could lead to improvements in dredging efficiency. If achieved, firms might be able to dredge the same amount of material at a lower cost or more material at the same cost. Furthermore, if more work were provided to the private hopper dredging industry, competition could increase if the existing dredging firms expanded their fleets or more firms entered the market. Consequently, the prices that the government pays to contractors could fall. However, economic principles also suggest that if an industry is given more work without increasing capacity or the number of competing firms, prices could rise because the demand for its services has increased. The Corps’ and private industry’s respective roles in the hopper dredging market have changed since legislation enacted in 1978 prompted a movement toward privatization of hopper dredging in the United States. Since that time, the Corps has gradually reduced its hopper dredging fleet from 14 to 4 vessels, while a private hopper dredging industry of five firms and 16 vessels has emerged. Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the Corps needs to retain at least a small hopper dredge fleet to (1) provide additional dredging capacity during peak demand years, (2) meet the emergency and national defense needs identified in the 1978 legislation, and (3) provide an alternative work option at times when the industry offers unreasonable bids or no bids at all. To determine the reasonableness of private contractor bids, the Corps develops a government cost estimate for its hopper dredging solicitations. If the low bid is no more than 25 percent above the government cost estimate, the Corps awards the contract. If all bids exceed the government cost estimate by more than 25 percent, the Corps may pursue a number of options, including performing the work itself. The practical value of this protection against high bids, however, has been limited by the Corps’ use of some outdated contractor cost information and its continued use of an expired policy to calculate transit costs. Before 1978, the Corps performed all of the nation’s hopper dredge work. In 1978, the Congress passed legislation to encourage private industry participation in all types of dredging and required the Corps to reduce the fleet of federal vessels to the minimum necessary for national defense and emergency purposes, as industry demonstrated its capability to perform the work. According to the Senate committee report associated with the 1978 legislation, one of the law’s main purposes was to provide incentives for private industry to construct new hopper dredges. Between 1978 and 1983, as a private hopper dredging industry began to emerge, the Corps reduced its hopper dredge fleet from 14 to its current 4 vessels. By the late 1980s, the Corps stopped assigning its hopper dredges to new construction projects (primarily channel deepening), leaving this work entirely to private industry. Both Corps and private industry hopper dredges continue to perform maintenance work on existing channels. From fiscal years 1998 through 2002, the Corps’ dredges performed about 28 percent of the nation’s hopper dredging maintenance work, annually dredging about 16 million cubic yards of material at a cost of about $49 million (in constant 2002 dollars). 
During the same period, industry dredges performed about 72 percent of the nation’s hopper dredging maintenance work, dredging about 40 million cubic yards of material annually, at a cost of about $93 million. As a result of the 1978 legislation, seven firms emerged to compete for the Corps’ hopper dredging contracts. Consolidation and firm buy-outs in the 1990s have left five firms in today’s market. (Appendix II contains a more detailed description of the U.S. hopper dredge fleet.) Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the Corps’ hopper dredge fleet currently (1) provides additional dredging capacity during peak demand years, (2) meets emergency dredging and national defense needs identified in the 1978 legislation, and (3) provides an alternative work option when industry provides no bids or when its bids exceed the government cost estimate by more than 25 percent. In addition, representatives of selected ports and the maritime industry generally supported the Corps’ retention and operation of a federal hopper dredge fleet to ensure that dredging needs are met in a timely manner. One of the reasons for the Corps to maintain a hopper dredge fleet is that changes in annual weather patterns, such as El Niño, and severe weather events, such as hurricanes and floods, can create a wide disparity in the demand for hopper dredging services from year to year. During fiscal year 1997 the Corps and private industry used their hopper dredges for maintenance work to remove almost 77 million cubic yards nationwide. In contrast, during fiscal year 2000 they removed about 50 million cubic yards. (See fig. 2.) Hopper dredging needs at the mouth of the Mississippi River are particularly variable from year to year, with annual dredging requirements ranging from 2 million to 50 million cubic yards. Representatives from private dredging firms maintain that industry is not likely to build the additional capacity needed to meet demand in peak years. Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the federal government should provide the additional dredging capacity required to meet the needs of peak demand years. The Corps’ hopper dredges are also needed to respond to emergency dredging assignments. For example, according to a Corps official, it was necessary for the Corps to send the Essayons to finish work on a project in Alaska that was critical to complete before the winter season and freezing conditions set in. In addition, Corps vessels have been used during instances where industry has submitted no bids in response to solicitations. For example, when rains in the Mississippi River Basin caused a build-up of material in navigation channels, the Corps issued a solicitation, but no bids were received because industry vessels were unavailable. Consequently, the Wheeler was used to perform the work. In such situations, the Corps’ fleet acts as insurance to meet dredging needs, ensuring that shipping patterns are not adversely affected. The existence of the Corps’ fleet theoretically offers a measure of protection against inordinately high bids from private contractors. 
While the private dredging market consists of 16 dredges owned by five firms, not all dredges compete for any given solicitation because (1) some, if not most, hopper dredges are committed to other jobs; (2) hopper dredges may be in the shipyard; (3) differences in hopper dredge size and capability mean that not all hopper dredges are ideally suited to perform the work for a particular job; and (4) hopper dredges cannot quickly move from one dredging region to another. For example, large hopper dredges may have difficulty maneuvering in small inlet harbors, and small hopper dredges may be inefficient at performing large projects with distant disposal sites. Thus, the Corps’ hopper dredge fleet provides an alternative dredging capability that can be brought to bear when private dredges are not available or when private industry bids are deemed too high. The Corps’ government cost estimate for hopper dredging work is pivotal in determining the reasonableness of private contractor bids. The Corps is required to determine a fair and reasonable estimate of the costs for a well-equipped contractor to perform the work. By law, the Corps may not award a dredging contract if the price exceeds the government estimate by more than 25 percent. In such cases, the Corps has several options. It can (1) cancel the solicitation, (2) readvertise the solicitation, (3) consider bidders’ challenges to the accuracy of the Corps’ cost estimate, (4) convert the solicitation into a negotiated procurement, or (5) use one of its own dredges to do the work. The accuracy of the Corps’ cost estimate depends on having access to up-to-date cost information. Although the Corps adjusts contractor cost data annually to reflect current pricing levels, this step does not account for fundamental changes, such as an industry vessel reaching the end of its depreciable life or industry acquisition of new vessels. The Corps has not obtained comprehensive industrywide contractor cost information since 1988. Since then, contractors have provided the Corps updated cost information to support specific costs included in the Corps’ cost estimates that they believe to be outdated, but they are not required to provide updated information for all costs. As a result, the Corps has updated cost information only for the costs that contractors have chosen to provide. In our discussions with Corps officials, they acknowledged the need to initiate an effort to obtain and verify current cost data for industry vessels. In addition, the Corps continues to follow an expired policy when calculating contractor transit costs to the dredge site, further calling into question the accuracy of the government cost estimates. The Corps’ Engineering Regulation 1110-2-1300, which called on the Corps to calculate industry transit costs to the dredge site based on the location of the second-closest industry dredge, expired in 1994. However, the Corps continues to use this method when calculating transit costs for at least some of its solicitations. For example, Corps officials followed the expired policy when demonstrating to us how they calculated the transit costs for a solicitation in Washington State. In this case, the second-closest industry dredge was located in the Gulf of Mexico, and the estimated transit costs amounted to about $480,000 because the vessel would have had to travel thousands of miles and go through the Panama Canal. 
However, the private contractor’s dredge that performed the work was located fewer than 500 miles from the dredge site, for which the transit costs were estimated to be about $100,000. After bringing this issue to the Corps’ attention, the Corps told us that it plans to reexamine its transit cost policies. Restrictions on the Corps’ hopper dredge fleet, which began in fiscal year 1993, have imposed costs on the Corps’ dredging program, but have thus far not resulted in proven benefits. Most of the costs of the Corps’ hopper dredges are incurred regardless of how frequently the dredges are used. A possible benefit of the restrictions is that they could eventually encourage more firms to enter the market or existing firms to add capacity, which, in turn, may promote competition, improve dredging efficiency, and thus reduce prices. Although there has been an increase in the number of private industry hopper dredges since the restrictions were first imposed, the number of private firms in the hopper dredging market has decreased. In addition, during the same time period, the number of contractor bids per Corps solicitation has decreased, while the number of winning bids exceeding the Corps’ cost estimate has increased. Restrictions on the Corps’ vessels could also potentially enhance the Corps’ responsiveness to emergency dredging needs. However, the Corps is unable to evaluate whether emergency dredging needs have been met more or less efficiently since the restrictions went into effect because it does not specifically identify and track emergency work performed by either Corps or industry vessels. The Corps incurs many of the costs for maintaining and operating its hopper dredges regardless of how much the vessels are used. Thus, as shown in table 1, when the Wheeler was placed in ready reserve and restricted to 55 workdays plus emergencies, the average number of days it worked per year and its productivity (measured by cubic yardage dredged) declined by about 56 percent, while its costs declined by only 20 percent. Crew size declined by about 21 percent, but payroll costs declined by just 2 percent because dredging needs required the Corps to pay the smaller crew overtime to finish the work. In addition, fuel costs did not drop in proportion to use and productivity because the vessel’s engines were utilized for shipboard services (e.g., electricity) while it remained at the dock—a necessary procedure for maintaining minimal vessel readiness. Other costs unrelated to crew or fuel represent the plant or capital costs of a dredge, many of which the Corps incurs regardless of how much a dredge is used. The Corps refers to the difference between a vessel’s total costs and the value of the dredging services it provides (the net cost) as a “subsidy.” The Corps estimates the annual subsidy to maintain the Wheeler idle in ready reserve at about $8.4 million. This subsidy is a direct cost of ready reserve. In addition to the subsidy, the Corps must pay contractors to do the work the Wheeler no longer performs. The difference between the vessel’s traditional workload and its current workload is approximately 6.6 million cubic yards. Depending on whether private industry hopper dredges are able to perform this work in aggregate at a lower or higher cost than if the Wheeler performed the work, the total cost to government of the Wheeler in ready reserve status could be either lower or higher than the Corps’ estimated subsidy. 
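The relationship the report describes between the Wheeler’s subsidy and the total cost to the government can be written as a simple calculation. In the sketch below, the $8.4 million subsidy and the roughly 6.6 million cubic yards of displaced work come from the figures above, while the per-cubic-yard costs are hypothetical placeholders, not Corps or industry data.

```python
def ready_reserve_net_cost(idle_subsidy: float,
                           displaced_yardage: float,
                           contractor_cost_per_yd: float,
                           corps_cost_per_yd: float) -> float:
    """Annual net cost to the government of keeping a dredge in ready reserve.

    idle_subsidy: the vessel's costs net of the value of the limited dredging
        it still performs (about $8.4 million for the Wheeler).
    displaced_yardage: work shifted to contractors (about 6.6 million cubic
        yards for the Wheeler).
    The per-cubic-yard costs are unknown here and must be supplied.
    """
    contractor_total = displaced_yardage * contractor_cost_per_yd
    corps_total = displaced_yardage * corps_cost_per_yd
    return idle_subsidy + (contractor_total - corps_total)

# Hypothetical unit costs: if contractors were $0.30 per cubic yard cheaper,
# the net cost would fall about $2 million below the subsidy.
print(ready_reserve_net_cost(8.4e6, 6.6e6,
                             contractor_cost_per_yd=2.50,
                             corps_cost_per_yd=2.80))
```

If contractors can perform the displaced work for less than the Wheeler would have, the net cost falls below the subsidy; if they charge more, it exceeds the subsidy.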
In addition to the Wheeler’s subsidy, restrictions have led to inefficient operations for the other Corps hopper dredges, resulting in additional costs for the Corps. According to Corps officials, September is the ideal time to dredge in the Pacific Northwest, because dredging conditions generally deteriorate in October. The officials mentioned that, at times, the Essayons and the Yaquina have reached their fiscal year operating limits and returned to port in September, before the projects they were working on were complete. The dredges were sent back to complete the project after the new fiscal year began in October, even though weather conditions may have made dredging conditions less than optimal, and the Corps incurred additional transit costs. According to some Corps officials, the annual operating limit cannot be extended. For example, the Essayons stopped dredging the mouth of the Columbia River and returned to port at the end of fiscal year 2001 when it reached its operating limit. The vessel returned to finish the work at the start of the new fiscal year, but adverse weather conditions prevented it from fully dredging the river. As a result, some projects may be postponed until the following fiscal year, reprioritized, or canceled altogether. A potential benefit of the restrictions on the Corps’ hopper dredge fleet is that an increase in demand for industry’s dredging services could encourage existing firms to make capital investments (e.g., build new dredges or improve existing dredges) or encourage more firms to enter the dredging market. Dredging industry representatives told us that the restrictions have already led to an increase in the number of industry vessels and, as evidence, pointed to the addition of two new dredges, the Liberty Island, a large-class dredge introduced in 2002, and the Bayport, a medium-class dredge introduced in 1999, as well as the return of the Stuyvesant, a large-class dredge, to the U.S. hopper dredging market. Moreover, they added that since the restrictions, the private hopper dredging industry has also made improvements and enhancements to its existing fleet—specifically the Columbia—thus improving the efficiency of its dredging operations and increasing the capacity of its vessels. However, the representatives also told us that the restrictions are only one of several factors the private hopper dredging industry considers when deciding to acquire or build an additional dredge. In addition, firms must invest in equipment to remain competitive in any industry. As a result, it is unclear to what extent the restrictions on the Corps’ vessels were a factor in industry’s investment decisions to increase its fleet size and add dredging capacity. While the private hopper dredging industry has recently placed two new dredges on line, it has sold the small-class dredge Mermentau and placed another small-class dredge, the Northerly Island, up for sale. In addition, during the last decade the private hopper dredging industry has decreased from seven firms to five firms. Specifically, since 1993, two firms exited the market, one firm entered the market, and two firms merged. The consolidation in the industry does not necessarily mean that competition has been reduced because the new industry structure could have resulted in enhanced capacity, flexibility, and efficiency for the remaining firms. However, it is uncertain whether the private hopper dredging industry is more or less competitive now than it was prior to the restrictions. 
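The two measures used in the analysis below, the average number of bids per solicitation and the winning bid as a percentage of the Corps’ cost estimate, can be computed from solicitation records with a simple aggregation. The records in this sketch are hypothetical and illustrate only the calculation, not the data behind the figures.

```python
from statistics import mean

# Hypothetical solicitation records: (fiscal year, number of bids,
# winning bid in dollars, government cost estimate in dollars).
solicitations = [
    (1991, 4, 1_900_000, 2_400_000),
    (1991, 3, 2_100_000, 2_500_000),
    (1998, 1, 3_300_000, 3_000_000),
    (1998, 2, 2_900_000, 3_000_000),
]

def summarize(records, year):
    """Average bids per solicitation and average winning bid as a
    percentage of the government estimate for one fiscal year."""
    subset = [r for r in records if r[0] == year]
    avg_bids = mean(r[1] for r in subset)
    avg_pct_of_estimate = mean(100 * r[2] / r[3] for r in subset)
    return avg_bids, avg_pct_of_estimate

for fy in (1991, 1998):
    bids, pct = summarize(solicitations, fy)
    print(f"FY{fy}: {bids:.1f} bids per solicitation, "
          f"winning bid = {pct:.0f}% of estimate")
```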
Historical data reveal that, in general, as shown in figure 3, in years when more material is available to private industry, industry submits fewer bids per Corps solicitation. For example, during fiscal year 1991, when the Corps estimated that 31.3 million cubic yards of maintenance material would be contracted out to industry, the average number of bids per solicitation was 3.2. In contrast, during fiscal year 1998, when the Corps estimated that 53.7 million cubic yards of maintenance material would be contracted out to industry, industry submitted an average of about 2 bids per solicitation. Likewise, as shown in figure 4, in years when there were fewer industry bids per Corps solicitation, the average winning industry bid, as a percentage of the Corps’ cost estimate, was higher. For example, during fiscal year 1991, when the average number of bids per solicitation was 3.2, the average winning bid was 79 percent of the Corps’ estimate. In contrast, during fiscal year 1998, when the average number of bids per solicitation was 2, the average winning bid was 97 percent of the Corps’ estimate. In general, when there are fewer industry bids per solicitation, the winning industry bid relative to the Corps’ cost estimate increases. In fiscal years 1990 through 2002, more than half of the solicitations for hopper dredging maintenance work received just one or two bids from private contractors. During these years, when only one contractor bid on a solicitation, the bid exceeded the government estimate 87 percent of the time. In contrast, when there were three or more bids on a solicitation, the winning bid exceeded the government estimate only 22 percent of the time. After the Corps’ hopper dredge fleet was effectively restricted to 180 workdays (fiscal years 1993 through 2002), the number of industry bids per solicitation declined from about 3 to roughly 2.4. Specifically, as shown in figure 5, when there were no limits on the use of the Corps’ hopper dredges (fiscal years 1990 through 1992), only 5 percent of solicitations received one bid. After limits were placed on the Corps’ hopper dredges (fiscal years 1993 through 2002), 19 percent of solicitations had only one bid. Moreover, before the restrictions, 67 percent of the solicitations had three or more bids, whereas, after the restrictions, only 42 percent had three or more bids. These changes might have been expected because, after the restrictions, industry’s share of hopper dredging work increased while the number of hopper dredging firms decreased from seven to five. Furthermore, in the time period following the imposition of the 180-day restriction, the frequency with which the winning industry bid exceeded the Corps’ cost estimate has increased. For example, as shown in figure 6, prior to the restrictions, the winning bid exceeded the Corps’ cost estimate 24 percent of the time. After the restrictions were imposed, the winning bid exceeded the Corps’ estimate 45 percent of the time. This finding is consistent with economic principles; that is, all else equal, an increase in demand for dredging by private industry with fixed supply would result in higher prices. It should be noted that the extent to which the restrictions contributed to the decrease in the number of industry bids per Corps solicitation and the increase in the winning industry bid relative to the Corps’ cost estimate is unknown. Other factors could have also contributed to these changes. 
Other factors could have also contributed to these changes. For example, an increase in the demand for hopper dredging services for new construction projects or beach nourishment could lead to a decrease in the number of bids received for maintenance projects. Similarly, the introduction of environmental restrictions on when hopper dredging can take place could contribute to an increase in the winning industry bid relative to the Corps’ cost estimate. Nevertheless, the decrease in the number of bids per solicitation and the increase in bids exceeding the Corps’ cost estimates raise questions about the effects the restrictions may have had on competition and prices, demonstrating the need for a comprehensive analysis of the effects of the restrictions on competition, efficiency, and prices. Another potential benefit of restrictions on the use of the Corps’ vessels is enhanced responsiveness to emergencies. However, there is disagreement within the Corps on this issue. One Corps official believes that a dredge in ready reserve is better able to handle emergencies than if it were working 180 days because it is in a “standby” status at the dock, ready to respond. In contrast, others in the Corps believe that a dredge can respond just as well or better to an emergency while working a full schedule because the dredge can temporarily halt the project it is working on, respond to the emergency, and then return to its scheduled work. During our discussions with representatives from selected ports and the maritime industry, we did not learn of any instances of problems in the Corps’ responsiveness to emergencies prior to restrictions or instances of improved response time since the restrictions went into effect. A major reason that the Corps is unable to evaluate whether emergency dredging needs have been met more or less efficiently since the restrictions went into effect is that its dredging database—the Dredging Information System—does not specifically identify and track emergency work performed either by Corps or industry vessels. Consequently, the Corps cannot readily determine how many days have been needed for each of its vessels to respond to emergencies. In addition, the Corps does not know whether it is paying contractors more or less for performing the emergency dredging projects compared with what it pays for routinely scheduled maintenance work. Such information would be a valuable tool for determining how emergency dredging needs can be met in a manner that is the most economical and advantageous to the government—that is, when and under what circumstances to contract with the private hopper dredging industry for these emergencies or when to use Corps vessels. In discussing this issue, Corps officials agreed that obtaining information on emergencies is important for managing the hopper dredging program and told us they have initiated efforts to collect such data and incorporate it into the Corps’ dredging database. In a June 2000 report to the Congress, the Corps stated that the placement of the Wheeler in ready reserve had been a success and recommended that the vessel remain in ready reserve. However, the report contained a number of analytical and evidentiary shortcomings, and, when asked, the Corps could not provide any supporting documentation for its recommendation. The report also proposed that the McFarland be placed in ready reserve, but the Corps did not conduct an analysis to support this proposal. 
The costs to place the McFarland in ready reserve are likely to be similar to the costs incurred by placing the Wheeler in ready reserve. Because the McFarland’s workload would be reduced from 180 days to 55 days plus emergencies, the Corps would incur annual costs of about $8 million when the vessel is idle—largely because much of a vessel’s costs are incurred regardless of its level of use. Furthermore, according to the Corps, the McFarland will require at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness for future service. It is questionable whether such an investment in a vessel that would be placed in ready reserve and receive only minimal use is in the best interest of the government. The Water Resources Development Act of 1996 required the Corps to determine whether (1) the Wheeler should be returned to active status or continue in ready reserve status or (2) another federal hopper dredge should be placed in ready reserve status, and issue a report to the Congress on its findings. The Corps issued the required report in June 2000, recommending that the Wheeler remain in reserve and proposing that an additional dredge, the McFarland, also be placed in reserve. However, when asked, the Corps official who authored the report told us that he did not have any supporting documentation for the report. In addition, the report had a number of evidentiary and analytical shortcomings. For example, the evidence presented in the report showed that the price the government paid to the industry for hopper dredging was higher in the 2 years after the Wheeler was put in ready reserve than it was the year before. This raises questions about the validity of the recommendation contained in the report. Furthermore, the report did not contain a comprehensive analysis. A comprehensive economic analysis of a government program or policy would identify all the resulting costs and benefits, and, where possible, quantify these measures. Both the quantitative and qualitative costs and benefits would need to be compared and evaluated to determine the success or failure of a program and to serve as a basis for future policy decisions. With regard to the restrictions on the Corps’ hopper dredges, a comprehensive economic analysis might contain, among other things, all costs associated with the nonuse of the vessel and the potential benefits that might result from efficiency gains, increased competition, and lower prices. The analysis might also examine whether ports, harbors, and access channels were maintained more or less effectively, or whether emergency dredging needs were met in a more or less timely and cost-effective manner following implementation of the restrictions. The Corps has not demonstrated that placing an additional hopper dredge in ready reserve, specifically the McFarland, would be beneficial to the United States. In its June 2000 report to the Congress on the ready reserve status of the dredge Wheeler, the Corps proposed that the McFarland be the next dredge placed in reserve. However, the Corps did not offer any analysis on the potential costs of placing an additional Corps hopper dredge in reserve or the benefits of such an action. Moreover, to be available for future use, the 35-year-old McFarland requires at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness. 
The repairs include asbestos removal; repairs to the hull; engine replacement; and upgrades of equipment, machinery, and other shipboard systems. It is questionable whether spending $25 million to rehabilitate the McFarland and then placing it in ready reserve is prudent. Furthermore, if the McFarland were placed in ready reserve, the Corps would incur annual costs similar to the subsidy that is already incurred for the Wheeler. Because the Wheeler’s costs do not vary proportionally to its use, the cost to operate the vessel 55 days a year plus emergencies in ready reserve is only marginally less than if it were to operate 180 days a year. The Corps estimates that if the McFarland were placed in ready reserve, it would require an annual subsidy of about $8 million to remain idle. The Corps would also need to contract out the work the McFarland would no longer be doing—approximately 2 to 3 million cubic yards per year. Depending on whether private industry hopper dredges are able to perform this work in aggregate at a lower or higher cost than if the McFarland performed the work, the total cost to the government of placing the McFarland in reserve could be either lower or higher than the estimated annual subsidy; a simple illustration of this comparison appears below. Finally, placing the McFarland in ready reserve could increase competition if such restrictions spurred an increase in investment in private hopper dredges. However, it is questionable whether placing the McFarland in ready reserve would provide enough incentive for industry to make additional investments. Hopper dredges play a critical role in keeping the nation’s ports open for both domestic and international trade. This function has been and will likely continue to be carried out through a mix of private industry and government-owned dredges. At issue is how to use this mix of dredges in a manner that maintains the viability of the private fleet while minimizing the costs to government. The Corps has proposed to the Congress that additional restrictions on the use of its hopper dredges are warranted, but it cannot provide any analytical evidence to support its position. The limited evidence that does exist indicates that these restrictions have imposed costs on the government, while the benefits are largely unproven. Unless and until the Corps gathers the data, comprehensively analyzes the costs and benefits of restrictions on the use of its hopper dredges, and takes steps to update its cost estimates, there is no assurance that the nation’s hopper dredging needs are being met in a manner that is the most economical and advantageous to the government. 
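The ready reserve comparison noted above can be framed as a simple annual cost calculation. The sketch below is a minimal illustration under stated assumptions: only the roughly $8 million idle-cost estimate and the 2-to-3-million-cubic-yard workload come from this report, while the per-cubic-yard figures are hypothetical placeholders rather than Corps or industry data.

```python
# Illustrative annual cost comparison for placing the McFarland in ready reserve
# versus keeping it on a 180-day schedule. Only the roughly $8 million idle-cost
# estimate and the 2-to-3-million-cubic-yard workload come from this report;
# the per-cubic-yard figures below are hypothetical placeholders.
IDLE_COST = 8_000_000              # annual cost incurred largely regardless of use
WORKLOAD_CY = 2_500_000            # cubic yards per year that would shift to industry
CORPS_VARIABLE_PER_CY = 1.50       # hypothetical added cost per cubic yard if the McFarland works
INDUSTRY_PRICE_PER_CY = 3.75       # hypothetical winning industry bid per cubic yard

# Scenario 1: the McFarland works its normal schedule (fixed costs plus variable dredging costs).
cost_operating = IDLE_COST + WORKLOAD_CY * CORPS_VARIABLE_PER_CY
# Scenario 2: the McFarland sits in ready reserve and the same workload is contracted to industry.
cost_reserve = IDLE_COST + WORKLOAD_CY * INDUSTRY_PRICE_PER_CY

print(f"Operating 180 days: ${cost_operating:,.0f} per year")
print(f"Ready reserve:      ${cost_reserve:,.0f} per year")
print(f"Net cost of reserve: ${cost_reserve - cost_operating:,.0f} per year")
```

Whether ready reserve costs the government more or less in a given year turns on how the hypothetical industry price compares with the Corps' own variable dredging cost, which is why a comprehensive analysis of both is needed.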
In an effort to discern the most economical and advantageous manner in which to operate its hopper dredge fleet, and because the Corps has been unable to support, through analysis and documentation, the costs and benefits of placing its hopper dredges in ready reserve, we recommend that the Secretary of the Army direct the Corps of Engineers to (1) obtain and analyze the baseline data needed to determine the appropriate use of the Corps’ hopper dredge fleet, including, among other things, data on the frequency, type, and cost of emergency work performed by the Corps and the private hopper dredging industry; contract type; and solicitations that receive no bids or where all the bids received exceeded the Corps’ estimate by more than 25 percent; (2) prepare a comprehensive analysis of the costs and benefits of existing and proposed restrictions on the use of the Corps’ hopper dredge fleet—including limiting the Corps’ dredges to 180 days of work per year, placing the Wheeler into ready reserve, limiting the McFarland to its historic work in the Delaware River, and placing the McFarland into ready reserve status; and (3) assess the data and procedures used to perform the government cost estimate when contracting dredging work to the private hopper dredging industry, including, among other things, updating the cost information for private industry hopper dredges and examining the policies related to calculating transit costs. We provided a draft of this report to the Acting Assistant Secretary of the Army and the Dredging Contractors of America for review and comment. In a letter dated March 21, 2003, the Department of the Army (Army) provided comments on a draft of this report. The Army agreed with our recommendations and provided time frames for implementing each of them. It also provided additional comments suggesting clarification and elaboration on a number of issues discussed in our report. See appendix III for the Army’s comments and our responses. In a letter dated March 3, 2003, the Dredging Contractors of America (DCA) provided detailed comments on a draft of this report. DCA generally agreed with our recommendations. However, it believed strongly that reducing the scheduled use of the Corps’ hopper dredges has resulted in proven benefits. We continue to believe that the relationship between the restrictions and benefits to the government is unproven because (1) the Corps incurs costs related to the underutilization of its dredges, and (2) since the restrictions were first imposed, the Corps has received fewer industry bids per solicitation, and the percentage of winning industry bids that exceed the Corps’ cost estimates has increased. See appendix IV for DCA’s comments and our responses. We conducted our review between January 2002 and February 2003 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is presented in appendix I. We will send copies of the report to the Secretary of the Army, appropriate congressional committees, and other interested Members of Congress. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V. 
To assess the changing roles of the Corps and industry in hopper dredging and the characteristics of the hopper dredging industry, we obtained Corps’ studies and data from the Corps’ Navigation Data Center that provided information on the hopper dredging requirements of the United States, including the quantity of material dredged annually by the Corps and the private hopper dredging industry, and their associated costs. We also reviewed the laws that define these roles. In addition, we interviewed Corps officials; representatives from the five hopper dredging firms (B+B Dredging Co., Inc., Bean Stuyvesant LLC, Great Lakes Dredge & Dock Company, Manson Construction Co., and Weeks Marine, Inc.); the maritime industry (the Delaware River Port Authority, Maritime Exchange for the Delaware River and Bay, Navios Ship Agencies, Inc., and the Steamship Association of Louisiana); dredging and port associations (Dredging Contractors of America, Pacific Northwest Waterways Association, and American Association of Port Authorities); and selected ports (Portland, Seattle, New York/New Jersey, New Orleans, and Wilmington). To obtain a better understanding of hopper dredging from the perspective of the private hopper dredging industry, we visited and toured a medium-class industry hopper dredge working in the Chesapeake and Delaware Canal and interviewed its crew. Moreover, we reviewed the Corps’ cost estimating policies. To determine the intent and effects of the restrictions placed on the use of the Corps’ hopper dredge fleet, we analyzed the laws governing the use of the Corps’ hopper dredges. We also reviewed studies conducted by the Corps and the Pacific Northwest Waterways Association. For qualitative information, we obtained documents and interviewed Corps officials from headquarters and district and division offices, including Jacksonville, New Orleans, Philadelphia, Portland, Walla Walla, and the North Atlantic Division, as well as representatives from the private hopper dredging firms, selected ports, dredging and port associations, and the maritime industry. For quantitative information, we performed descriptive statistical analyses using data on the winning contractor bids, estimated industry dredging volumes, and the Corps’ cost estimate available from the Corps’ Dredging Information System database. To evaluate whether further restrictions on the Corps’ hopper dredge fleet, including placing the Corps’ dredge McFarland in ready reserve, are justified, we reviewed studies and analyses performed by the Corps to support its proposal to place the McFarland in ready reserve. We also interviewed officials from the Corps and representatives from the private hopper dredging industry, selected ports, and the maritime industry to gain their views on the possible effects on competition and emergency response if the current restrictions on the Corps’ hopper dredges, particularly the McFarland, were modified. To determine the costs associated with repairing the McFarland, we obtained and analyzed cost estimates for the repairs prepared by the Corps’ Philadelphia district office and discussed the estimates with Corps district and headquarters officials. We also visited and toured the McFarland when it was working in the Delaware River and interviewed the McFarland’s crew and Corps officials from the Philadelphia district and the North Atlantic Division offices. We conducted our review between January 2002 and February 2003 in accordance with generally accepted government auditing standards. 
There are currently 20 hopper dredges operating in the United States. (See table 2.) Of the 20 dredges, 4 are small-class hopper dredges, 10 are medium-class hopper dredges, and 6 are large-class hopper dredges. Of the 16 private hopper dredges, Great Lakes Dredge & Dock Company owns 7, Manson Construction Co. owns 3, and the remaining firms (B+B Dredging Co., Inc., Bean Stuyvesant LLC, and Weeks Marine, Inc.) each own 2. 1. As discussed in our report, the Corps’ cost estimate is pivotal in determining the reasonableness of private contractors’ bids, and by law the Corps may not award a contract if the bid price exceeds the cost estimate by more than 25 percent. Consequently, we believe that it is critical for the Corps to have comprehensive data for all costs and all industry vessels. The Army recognized in its comments that the cost information for industry hopper dredges is outdated and needs to be evaluated, and has initiated an effort to improve the cost data. While we recognize that updating the cost data could potentially increase or decrease the Corps’ cost estimates, we believe that unless the Corps has updated cost data for all industry vessels, there is no assurance that the Corps’ cost estimates are a reliable tool for determining whether industry bids are within 25 percent of the government estimate as required by law. The Army’s suggestion of clustering several navigation projects for west coast contracts—similar to the Dredging Contractors of America’s comment numbered 3—is one of several possible options for addressing the costs of moving dredges to and from the west coast region. 2. In our report, we illustrated how a rigid interpretation of the Corps’ policy that limits the number of days its vessels can operate resulted in inefficient operations. We recognize that the Corps’ hopper dredge-owning district has the flexibility to schedule the dredge within the maximum allowable number of days. However, because time-sensitive dredging needs may disrupt the scheduled use of the dredge, we believe that it would be prudent for the Corps to examine whether there is a need for some flexibility in implementing the annual operating restrictions on the Corps vessels. As discussed in our report, the Corps incurs many of the costs for maintaining and operating its hopper dredges regardless of how much the vessels are used. While it is true that the Corps would save contracting costs if the river is not shoaling and the work previously performed by the Wheeler does not need to be done, the Corps is still paying to keep the Wheeler idle in reserve when the vessel could be working to offset its costs. We recognize that it is plausible that private industry’s hopper dredging costs could decrease over time if its vessels performed more work. More important to the government, however, is whether any potential decrease in industry costs is passed along in the form of lower prices. The data in our report raise questions about whether any cost savings industry has realized have trickled down to the government. The Army’s suggestion regarding a sensitivity analysis is one of many analyses that it may wish to consider in its comprehensive analysis of the costs and benefits of existing and proposed restrictions on the use of the Corps’ hopper dredges. 3. As acknowledged in our report, private industry has increased its hopper dredging capacity. 
However, the exact change in capacity and the degree to which the capacity increases are attributable to the restrictions on the Corps vessels are uncertain. While it is plausible that the restrictions may have caused industry to make these capital improvements, representatives of the dredging industry told us that the restrictions were one of several factors that they considered before building or acquiring additional vessels, including the construction of the Bayport and the Liberty Island. It is uncertain whether these investments occurred as a result of the restrictions or whether the investments were necessary to remain competitive in the industry. Hypothetically, more vessels and increased capacity should translate to more bids and lower bid prices. However, our analysis showed that the number of industry bids per hopper maintenance dredging solicitation declined from about 3 bids before restrictions to roughly 2.4 bids after restrictions were placed on the Corps vessels. This finding reinforces the need for a comprehensive analysis of the benefits and costs of the restrictions on the Corps’ dredges. 4. The Army’s comment reinforces our concerns about whether the restrictions have resulted in proven benefits. This is one of the issues that should be considered in the comprehensive analysis we are recommending. 5. The Army recognizes the need to update the information being collected by its Dredging Information System and has initiated efforts to address this issue. Obtaining and analyzing such information is an important prerequisite to determining whether all hopper dredging needs, in particular time-sensitive needs, are being met in the manner most cost-effective to the government. While the Army refers to a mechanism it has developed with industry to ensure that time-sensitive and urgent dredging needs are managed, we believe it is premature to claim that the process has resulted in meeting time-sensitive dredging needs in a cost-effective manner. 6. The Army’s comments did not address the lack of supporting documentation for its June 2000 report to the Congress. Instead, the Army reiterated points it has made in its previous comments and raised a number of other issues related to hopper dredging. Until a comprehensive analysis is performed on the benefits and costs of restrictions on the Corps’ hopper dredge fleet, there is no assurance that the nation’s hopper dredging needs are being met in the manner that is most economical and advantageous to the government. DCA generally agreed with our recommendations. However, DCA strongly believes that reducing the scheduled use of the Corps’ hopper dredges has resulted in proven benefits. DCA stated that available information and data show that benefits have resulted. However, we believe the relationship between the restrictions on the Corps’ hopper dredge fleet and benefits to the government remains unproven. First, the extent to which use restrictions on the Corps’ vessels were a factor in industry’s investment decisions to increase its fleet size and add dredging capacity is unclear. Second, the analysis provided by DCA to support its claim is not persuasive; it covered an insufficient period of time and presented data in a potentially misleading fashion. Specifically, DCA only included data for activities that occurred after the implementation of the first restriction on the Corps’ dredges. 
We believe that an analysis of the effects of the restrictions should include data covering the period before and after the restrictions because the time period before restrictions establishes the appropriate baseline against which to compare changes resulting from the restrictions. Discussed below are our corresponding detailed responses to DCA’s nine numbered comments in the three-page attachment to its letter. DCA also provided 21 pages of appendices, which we have not included in this final report because of the length. However, we have considered all of DCA’s comments in our response. 1. We have added language to expand our description of the legislation enacted in 1996 that further increased the role of private industry in hopper dredging. 2. We disagree that the Corps receives adequate, updated contractor cost information through claims and other audit-related activities. As part of this process, industry provides the Corps updated information only to support specific costs that it believes are outdated. Industry is not required to provide updated information for all costs. In addition, the updated information obtained through claims and other audit-related activities does not ensure that data are collected consistently for each of the vessels. For a vessel involved in multiple claims, the Corps may have more up-to-date costs than for a vessel with fewer claims. DCA stated in its comments that current cost information should be used because industry faces increasing labor, fuel, maintenance, and insurance costs. As mentioned in our report, the Corps adjusts estimated costs annually to reflect current price levels. These adjustments, however, do not account for fundamental changes, such as a vessel reaching the end of its depreciable life, which may also affect the cost estimate. For example, according to a Corps official, industry vessels are depreciated over 20 to 25 years. In 2003, 9 of the 16 industry vessels were 20 years or older and thus may be nearing the end of their depreciable lives. Unless the Corps has updated data for all costs and for all industry vessels, there is no assurance that the Corps’ cost estimates are a reliable tool for determining whether industry’s bids are within 25 percent of the government estimate as required by law. 3. As our report recommends, we believe the Corps should examine its policies related to calculating transit costs. We agree that DCA’s suggestion is one of several possible options for addressing this issue. 4. The extent to which the restrictions on the Corps vessels caused industry to make the investments that DCA cited as proven benefits is unclear. First, representatives of the dredging firms told us the restrictions were only one of several factors they considered before building or acquiring additional vessels, including the construction of the Bayport and Liberty Island. Second, firms must routinely replace and update equipment to remain competitive in any industry. While DCA stated that there was a substantial investment in the Columbia following restrictions, the vessel was originally built in 1944 and designed to transport military equipment during World War II. We believe it is plausible that the restrictions on the Corps’ vessels may have contributed to industry’s investment decisions; however, it is unclear to what extent the restrictions contributed to these decisions. 5. 
While private industry has added capacity, we question the basis for DCA’s calculation of the exact change in capacity and the degree to which the capacity increases are attributable to restrictions on the Corps’ hopper dredges. Over half of the increase in capacity cited by DCA is attributable to the return of one vessel—the Stuyvesant—to service in the United States. However, the Stuyvesant worked in the United States prior to the restrictions, and thus it is questionable whether this constitutes an increase in capacity. With regard to the portion of capacity increase due to the construction of the Bayport and the Liberty Island, as previously stated in response 4 above, the owners of these vessels said the restrictions were only one of several factors they considered in their decisions to build these two vessels. For these reasons, we believe it is questionable whether the capacity increases cited by DCA are proven benefits of the restrictions. 6. We believe that DCA’s claims are based on incomplete information and can be misleading because its analysis only included data after the implementation of the first restriction in fiscal year 1993. As a result, DCA only examined the marginal effects after the Wheeler was placed in ready reserve, but not the effects of all the restrictions. We believe a more appropriate analysis of the effects of the restrictions would compare data covering the periods before and after all restrictions because the time period before restrictions establishes the appropriate baseline against which to compare changes resulting from the restrictions. The following example illustrates how not examining the entire time period before and after all restrictions may produce incomplete and misleading results. We found that the percentage of bids less than the Corps’ cost estimate was 55 percent after the fiscal year 1993 restriction went into effect (fiscal years 1993 through 2002) and 58 percent after the Wheeler was placed in reserve (fiscal years 1998 through 2002). This finding is consistent with DCA’s claim, and, taken alone, could be viewed as an improvement. However, prior to the 1993 restriction (fiscal years 1990 through 1992), 76 percent of the winning bids were less than the Corps’ cost estimate. Thus, although the percentage of bids less than the Corps’ cost estimate increased after the Wheeler was placed in reserve, it remains well below the 76 percent observed before the restrictions. Furthermore, in an appendix to its comments, DCA criticized our approach of presenting data as averages across a number of years to assess the effects of the restrictions, and argued that a year-to-year evaluation should be used. However, in addition to restrictions on the Corps’ fleet, a number of other factors can lead to changes in the number of bids per solicitation and the winning bid relative to the Corps’ cost estimate from one year to the next. For example, high water flows in the Mississippi River can result in high accumulation of material at the mouth of the Mississippi River and increase the demand for time-sensitive dredging. During such periods, the winning bids relative to the Corps’ cost estimate may increase. However, the information necessary to control for these factors is unavailable. For example, the Corps does not collect data on time-sensitive dredging needs. 
As a result, we believe that presenting changes as averages across a number of years is more appropriate because it mitigates the annual variability in the factors that can also affect the number of bids per Corps solicitation and the winning bid relative to the Corps’ cost estimate. 7. We disagree with DCA’s comment. In fact, the historical data do indicate that, in general, in years when more material is available to industry, industry submits fewer bids per Corps solicitation. The information presented in figure 3 of our report shows that there is an inverse relationship between the estimated volume of material dredged and the annual bids per solicitation, which is statistically significant at the 95 percent confidence level. 8. DCA agreed that seven companies operated in the U.S. hopper dredging market prior to the fiscal year 1993 restriction, while five companies remain in the market today. However, DCA stated that the number of companies competing on a nationwide basis has increased from four to five in the last 10 years. Regardless of whether dredging firms operated on a regional or national basis, prior to the restrictions, seven firms provided hopper dredging services, and now there are five. Furthermore, as recognized in our report, the consolidation in the industry does not necessarily mean that competition has been reduced because the new industry structure could have resulted in enhanced capacity, flexibility, and efficiency for the remaining firms. Moreover, regardless of the number of firms in the industry, DCA acknowledged that the number of bids is more indicative of competition than merely the number of companies. As stated in our report, the number of industry bids per Corps solicitation has decreased on a nationwide basis from approximately 3 bids in the 3 years prior to the restrictions (fiscal years 1990 through 1992) to roughly 2.4 bids in the period following the restrictions (fiscal years 1993 through 2002). 9. We agree with DCA’s comment, which is already addressed by our recommendations. In addition, Chuck Barchok, Diana Cheng, Richard Johnson, Jonathan McMurray, Ryan Petitte, and Daren Sweeney made key contributions to this report.
The fiscal year 2002 Conference Report for the Energy and Water Development Appropriations Act directed GAO to study the benefits and effects of the U.S. Army Corps of Engineers' (Corps) dredge fleet. GAO examined the characteristics and changing roles of the Corps and industry in hopper dredging; the effect of current restrictions on the Corps' hopper dredge fleet; and whether existing and proposed restrictions on the fleet, including the proposal to place the McFarland in ready reserve, are justified. In addition, GAO identified concerns related to the government cost estimates the Corps prepares to determine the reasonableness of industry bids. In response to 1978 legislation that encouraged private industry participation in dredging, the Corps gradually reduced its hopper dredge fleet from 14 to 4 vessels (the Wheeler, the McFarland, the Essayons, and the Yaquina) while a private hopper dredging industry of five firms and 16 vessels has emerged. Dredging stakeholders generally agreed that the Corps needs to retain at least a small hopper dredge fleet to (1) provide additional dredging capacity during peak demand years, (2) meet emergency dredging needs, and (3) provide an alternative work option when industry provides no bids or when its bids exceed the government cost estimate by more than 25 percent. In reviewing the cost estimation process, GAO found that the Corps' estimates are based on some outdated contractor cost information and an expired policy for calculating transit costs. The restrictions on the use of the Corps' hopper dredge fleet that began in fiscal year 1993 have imposed costs on the Corps' dredging program, but have thus far not resulted in proven benefits. The Corps estimates that it spends $12.5 million annually to maintain the Wheeler in ready reserve, defined as 55 workdays plus emergencies, of which about $8.4 million is needed to cover the costs incurred when the vessel is idle. A possible benefit of restrictions on the Corps' vessels is that they could eventually encourage existing firms to add dredging capacity or more firms to enter the market, which, in turn, may promote competition, improve dredging efficiency, and lower prices. Although there has been an increase in the number of private industry hopper dredges since the restrictions were first imposed, the number of private firms in the hopper dredging market has decreased. In addition, during the same time period, the number of contractor bids per Corps solicitation has decreased, while the number of winning bids exceeding the Corps' cost estimates has increased. Although the Corps proposed that the McFarland be placed in ready reserve, it has not conducted an analysis to establish that this action would be in the government's best interest. Specifically, in a June 2000 report to the Congress, the Corps stated that the placement of the Wheeler in ready reserve had been a success and proposed that the McFarland also be placed in ready reserve. However, when asked, the Corps could not provide any supporting documentation for its report. Furthermore, according to the Corps, future use of the McFarland will require at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness. Such an investment in a vessel that would be placed in ready reserve and receive only minimal use is questionable.
We assessed 18 projects in NASA’s current portfolio. Four were in the “formulation” phase, a time when system concepts and technologies are still being explored, and 14 were in the “implementation” phase, where system design is completed, scientific instruments are integrated, and a spacecraft is fabricated. When implementation begins, it is expected that project officials know enough about a project’s requirements and what resources are necessary to meet those requirements that they can reliably predict the cost and schedule necessary to achieve its goals. Reaching this point requires investment. In some cases, projects that we reviewed spent 2 to 5 years and up to $100 million or more before being able to formally set cost and schedule estimates. Ten of the projects in our assessment for which we received data and that had entered the implementation phase experienced significant cost and/or schedule growth from their project baselines. Based on our analysis, development costs for projects in our review increased by an average of almost 13 percent from their baseline cost estimates—all in just two or three years—including one that went up more than 50 percent. It should be noted that a number of these projects had experienced considerably more cost growth before a baseline was established in response to statutory reporting requirements. Our analysis also shows that projects in our review had an average delay of 11 months to their launch dates. We found challenges in five areas that occurred across the various projects we reviewed and that can contribute to project cost and schedule growth. These challenges are not unique to NASA projects; many have been identified in other weapon and space systems that we have reviewed, and they have been prevalent in the agency for decades. Technology maturity. Four of the 13 projects in our assessment for which we received data and that had entered the implementation phase did so without first maturing all critical technologies, that is, they did not know whether technologies central to the project’s success could work as intended before beginning the process of fabricating the spacecraft. This means that the knowledge needed to make these technologies work was still lacking well into development. Consequences accrue to projects that are still working to mature technologies well into system development, when they should be focusing on maturing system design and preparing for production. Simply put, projects that start with mature technologies experience less cost growth than those that start with immature technologies. Design stability. The majority of the projects in our assessment that held a critical design review did so without first achieving a stable design. If design stability is not achieved, but product development continues, costly re-designs to address changes to project requirements and unforeseen challenges can occur. All of the projects in our assessment that had reached their critical design review and that provided data on engineering drawings experienced some growth in the total number of design drawings after their critical design review. Growth ranged from 8 percent to, in the case of two projects, well over 100 percent. Some of this increase can be attributed to changes in system design after the critical design review. Complexity of heritage technology. More than half the projects in the implementation phase—eight of them—encountered challenges in integrating or modifying heritage technologies. 
Additionally, two projects in formulation—Ares I and Orion—also encountered this problem. We found that the projects that relied on heritage technologies underestimated the effort required to modify them to the necessary form, fit, or function. Contractor performance. Six of the seven projects that cited contractor performance as a challenge also experienced significant cost and/or schedule growth. For example, through our discussions with the project offices, we were informed that contractors encountered technical and design problems with hardware that disrupted development progress. Development partner performance. Five of the 13 projects we reviewed encountered challenges with a development partner. In these cases, the development partner could not meet its commitments to the project within planned timeframes. This may have been a result of problems within the development partner organization itself or of problems faced by a contractor to that development partner. The challenges we identified in the NASA assessment are similar to ones we have identified in other weapon systems, including Defense space systems. We testified last year that DOD space system cost growth was attributable to programs starting before they had assurance that the capabilities being pursued could be achieved within available resources and time constraints. For example, DOD’s National Polar-orbiting Operational Environmental Satellite System (NPOESS) has increased in cost from roughly $6 billion to over $11 billion because of challenges with maturing key technologies. We have also tied acquisition problems in space systems to inadequate contracting strategies and contract and program management weaknesses. Further, we issued a report in 2006 that found DOD space system cost estimates were consistently optimistic. For example, DOD’s Space Based Infrared System High program was originally expected to cost about $4 billion and is now expected to cost more than $10 billion. We have found these problems are largely rooted in the failure to match the customer’s needs with the developer’s resources—technical knowledge, timing, and funding—when starting product development. In other words, commitments were made to achieving certain capabilities without knowing whether technologies and/or designs being pursued could really work as intended. Time and costs were consistently underestimated. As we have discussed in previous work on space systems at both DOD and NASA, a knowledge-based approach to acquisitions, regardless of the uniqueness or complexity of the system, is beneficial because it allows program managers the opportunity to gain enough knowledge to identify potential challenges earlier in development and make more realistic assumptions about what they can achieve. NASA has also taken significant steps to improve in the high risk area of acquisition management. For example, NASA revised its acquisition and engineering policies to incorporate elements of a knowledge-based approach that should allow the agency to make informed decisions. The agency is also instituting a new approach whereby senior leadership is reviewing acquisition strategies earlier in the process and has developed broad procurement tenets to guide the agency’s procurement practices. Further, NASA is working to improve management oversight of project cost, schedule, and technical performance with the establishment of a baseline performance review with senior management. 
In order to improve its contracting and procurement process, NASA has instituted an agencywide standard contract-writing application intended to ensure all contracts include the most up-to-date NASA contract clauses and to improve the efficiency of the contracting process. NASA is also requiring project managers to quantify the program risks they identify and collect more consistent data on project cost and technologies. It is taking other actions to enhance cost estimating methodologies and to ensure that independent estimates are used. These changes brought NASA’s policies more in line with best practices for product development. However, as we previously reported, NASA lacks defined requirements across centers and mission directorates for consistent metrics that demonstrate knowledge attainment through the development cycle. In order for a disciplined approach to take hold, we would expect project officials across the agency to be held accountable for following the same required policies. More steps also need to be taken to manage risk factors that NASA believes are outside of its control. NASA asserts that contractor deficiencies, launch manifest issues, partner performance, and funding instability are to blame for the significant cost and schedule growth on many of its projects that we reviewed. Such unforeseen events, however, should be addressed in project-level budgeting and resource planning through the development of adequate levels of contingency funds. NASA cannot be expected to predict unforeseen challenges, but being disciplined while managing resources, conducting active oversight of contractors, and working closely with partners can put projects in a better position to mitigate these risks should they occur. Realistically planning for and retiring technical or engineering risks early in product development allows the project to target reserves to issues NASA believes are outside of its control. In conclusion, managing resources as effectively and efficiently as possible is more important than ever for NASA. The agency is undertaking a new multi-billion dollar program to develop the next generation of spacecraft for human spaceflight at a time when it is faced with increasing demands to support important scientific missions, including the study of climate change, and to increase aeronautics research and development. By allowing major investment commitments to continue to be made with unknowns about technology and design readiness, contractor capabilities, requirements, and/or funding, NASA will merely be exacerbating the inherent risks it already faces in developing and delivering new space systems. Programs will likely continue to experience problems that require more time and money to address than anticipated. Over the long run, the extra investment required to address these problems may well prevent NASA from pursuing more critical science and space exploration missions. By contrast, by continuing to implement its acquisition management reforms and ensuring programs do not move forward with such unknowns, NASA can better align customer expectations with resources, minimize problems that could hurt programs, and maximize its ability to meet increased demands. Madam Chairwoman, this concludes my statement. I will be happy to answer any questions that you have. For additional information, please contact Cristina Chaplain at 202-512-4841 or [email protected]. Individuals making contributions to this testimony include Jim Morrison, Assistant Director; Shelby S. 
Oakley, Assistant Director; Greg Campbell; Richard A. Cederholm; Brendan S. Culley; Deanna R. Laufer; Kenneth E. Patton; and Letisha T. Watson. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the National Aeronautics and Space Administration's (NASA) oversight and management of its major projects. As you know, in 1990, GAO designated NASA's contract management as high risk in view of persistent cost growth and schedule slippage in the majority of its major projects. Since that time, GAO's high-risk work has focused on identifying a number of causal factors, including antiquated financial management systems, poor cost estimating, and undefinitized contracts. Because cost growth and schedule delays persist, this area--now titled acquisition management because of the scope of issues that need to be resolved--remains high risk. To its credit, NASA has recently made a concerted effort to improve its acquisition management. In 2007, NASA developed a comprehensive plan to address systemic weaknesses related to how it manages its acquisitions. The plan specifically seeks to strengthen program/project management, increase accuracy in cost estimating, facilitate monitoring of contractor cost performance, improve agencywide business processes, and improve financial management. While we applaud these efforts, our recent work has shown that NASA needs to pay more attention to effective project management. It needs to adopt best practices that focus on closing gaps in knowledge about requirements, technologies, funding, time, and other resources before it makes commitments to large-scale programs. For instance, the Mars Science Laboratory, which was already over budget, recently announced a 2-year launch delay. Current estimates suggest that the price of this delay may be $400 million--which drives the current project life-cycle cost estimate to $2.3 billion, up from its initial confirmation estimate of $1.6 billion. Also, in just one year, the development costs of NASA's Glory mission increased by 54 percent, or almost $100 million, because of problems NASA's contractor is having developing a key sensor. Total project costs for another project, Kepler, have increased almost another $100 million within 2 fiscal years because of similar issues. Taken together, these and other unanticipated cost increases hamper NASA's ability to fund new projects, continue existing ones, and pave the way to a post-shuttle space exploration environment. Given the constrained fiscal environment and pressure on discretionary spending, it is critical that NASA get the most out of its investment dollars for its space systems. The agency is increasingly being asked to expand its portfolio to support important scientific missions, including the study of climate change. Therefore, it is exceedingly important that these resources be managed as effectively and efficiently as possible for success. The recent launch failure of the Orbiting Carbon Observatory is an all-too-grim reminder of how much time, hard work, and resources can be for naught when a space project cannot execute its mission. We assessed 18 projects in NASA's current portfolio. Four were in the "formulation" phase, a time when system concepts and technologies are still being explored, and 14 were in the "implementation" phase, where system design is completed, scientific instruments are integrated, and a spacecraft is fabricated. When implementation begins, it is expected that project officials know enough about a project's requirements and what resources are necessary to meet those requirements that they can reliably predict the cost and schedule necessary to achieve its goals. Reaching this point requires investment. 
In some cases, projects that we reviewed spent 2 to 5 years and up to $100 million or more before being able to formally set cost and schedule estimates. Ten of the projects in our assessment for which we received data and that had entered the implementation phase experienced significant cost and/or schedule growth from their project baselines. Based on our analysis, development costs for projects in our review increased by an average of almost 13 percent from their baseline cost estimates--all in just two or three years--including one that went up more than 50 percent. It should be noted that a number of these projects had experienced considerably more cost growth before a baseline was established in response to statutory reporting requirements. Our analysis also shows that projects in our review had an average delay of 11 months to their launch dates. We found challenges in five areas that occurred across the various projects we reviewed and that can contribute to project cost and schedule growth. These challenges are not unique to NASA projects; many have been identified in other weapon and space systems that we have reviewed, and they have been prevalent in the agency for decades. (1) Technology maturity. Four of the 13 projects in our assessment for which we received data and that had entered the implementation phase did so without first maturing all critical technologies, that is, they did not know whether technologies central to the project's success could work as intended before beginning the process of fabricating the spacecraft. (2) Design stability. The majority of the projects in our assessment that held a critical design review did so without first achieving a stable design. If design stability is not achieved, but product development continues, costly re-designs to address changes to project requirements and unforeseen challenges can occur. (3) Complexity of heritage technology. More than half the projects in the implementation phase--eight of them--encountered challenges in integrating or modifying heritage technologies. (4) Contractor performance. Six of the seven projects that cited contractor performance as a challenge also experienced significant cost and/or schedule growth. (5) Development partner performance. Five of the 13 projects we reviewed encountered challenges with a development partner. In these cases, the development partner could not meet its commitments to the project within planned timeframes. This may have been a result of problems within the development partner organization itself or of problems faced by a contractor to that development partner.
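The portfolio-level figures cited in this statement (an average development cost increase of almost 13 percent and an average launch delay of 11 months) are computed from each project's baseline and current estimates. The sketch below illustrates that arithmetic with invented project names and values; it is not GAO's data or analysis.

```python
from statistics import mean

# Hypothetical project records: (name, baseline development cost in $ millions,
# current development cost in $ millions, launch delay in months). Invented values.
projects = [
    ("Project A", 400, 610, 24),   # more than 50 percent growth
    ("Project B", 250, 270, 6),
    ("Project C", 800, 830, 10),
    ("Project D", 150, 155, 4),
]

growth = [(current - baseline) / baseline for _, baseline, current, _ in projects]
delays = [delay for _, _, _, delay in projects]

print(f"Average development cost growth: {mean(growth):.1%}")
print(f"Average launch delay: {mean(delays):.0f} months")
for (name, baseline, _, _), g in zip(projects, growth):
    print(f"  {name}: {g:.1%} growth from a ${baseline} million baseline")
```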
The Congress, among others, has been concerned about the academic achievement gap between economically disadvantaged students and their more advantaged peers. The disparity between poor students’ performance on standardized tests and the performance of their nonpoor peers is well documented, and there is broad consensus that poverty itself adversely affects academic achievement. For example, on the National Assessment of Educational Progress (NAEP) reading assessment, 14 percent of fourth grade students who qualified for the free and reduced lunch program (a measure of poverty) performed at or above the proficient level in comparison to 41 percent of those students who did not qualify for the program. Furthermore, research has indicated the importance of socioeconomic status as a predictor of student achievement. Research has shown that the achievement gap falls along urban and nonurban lines as well: students living in high-poverty, urban areas are even more likely than other poor students to fall below basic performance levels. In addition to the achievement gap between poor and nonpoor students, concerns exist that this gap may be related to differences in per-pupil spending among schools that serve poor and nonpoor communities. School district spending is generally related to wealth and tax levels, and differences in school district spending can have an impact on spending at the school level. Recently, efforts have been made to achieve greater spending equity. Using a variety of approaches, a number of states have targeted some additional funding to poor students to address the unequal abilities of local districts to raise revenues for public schools. Comparing spending between schools in simple dollar terms provides one way to check for differences; however, this type of straightforward comparison may be insufficient to explain spending differences because it does not capture the higher cost of educating students with special needs. Schools with similar spending per pupil may actually be at a comparative disadvantage when adjustments are made to account for differing compositions of student needs. Though not definitive, some research shows that children with special needs—low-income students, students with disabilities, and students with limited English proficiency—may require additional educational resources to succeed at the level of their nondisadvantaged peers. Because these additional resources require higher spending, some researchers have adjusted per-pupil expenditures by “weighting” these students to account for the additional spending they may require. Weighting counts each student with special needs as more than one student, so that the denominator in the expenditures-to-students ratio is increased, causing the weighted per-pupil expenditure figure to decrease accordingly. For example, a school with an enrollment of 100 students may have 20 low-income students, 20 students with disabilities, and 10 students with limited English proficiency. Weighting these three groups of special needs students twice as heavily as other students causes weighted enrollment to rise to 150 students. If per-pupil spending is $4,000 without weighting, it drops to $2,667 when weights are applied. 
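The weighting arithmetic in the example above can be reproduced directly. The sketch below implements the calculation described in the text; the weight of 2.0, the enrollment counts, and the $4,000 figure are taken from that example, and the function name is our own for illustration.

```python
def weighted_per_pupil_spending(total_spending, regular_students, special_needs_counts, weights):
    """Per-pupil spending after weighting special-needs students more heavily than others."""
    weighted_enrollment = regular_students + sum(
        count * weight for count, weight in zip(special_needs_counts, weights)
    )
    return total_spending / weighted_enrollment

# Figures from the example in the text: 100 students, of whom 20 are low income,
# 20 have disabilities, and 10 have limited English proficiency, each weighted at 2.0.
enrollment = 100
special_needs = [20, 20, 10]                # low income, disabilities, limited English proficiency
regular = enrollment - sum(special_needs)   # 50 students counted at a weight of 1.0
total_spending = 4_000 * enrollment         # $4,000 per pupil before weighting

weighted = weighted_per_pupil_spending(total_spending, regular, special_needs, [2.0, 2.0, 2.0])
print("Unweighted per-pupil spending: $4,000")
print(f"Weighted per-pupil spending:   ${weighted:,.0f}")   # about $2,667
```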
The actual size of the weights assigned to low-income students, students with disabilities, and students with limited English proficiency is subject to debate and generally ranges between 1.2 and 2.0 for low-income students, between 1.9 and 2.3 for students with disabilities, and between 1.1 and 1.9 for students with limited English proficiency. The inner city schools selected for our study had higher proportions of children in poverty than the selected suburban schools. The selected inner city schools also generally had more students with limited English proficiency than their suburban counterparts. However, the proportions of students with disabilities in our selected inner city and suburban schools differed within and among metropolitan areas. In Denver, the selected inner city schools consistently had a higher proportion of students with disabilities than the selected suburban schools, while in Fort Worth, the suburban schools had a higher proportion of students with disabilities. (See table 1 for total enrollment and percentages of children in poverty, students with disabilities, and students with limited English proficiency for selected schools in the seven metropolitan areas reviewed in this study.) Differences in school spending can affect characteristics that may be related to student achievement. There is a large body of research on factors that may directly or indirectly contribute to student achievement. Spending has been the factor most studied for its effect on student achievement. Differences in student outcomes have also been related to factors such as teacher quality, class size, quality of educational materials, and parental involvement. Our study describes how some of these factors may differ across selected inner city and suburban schools. Differences in per-pupil spending between selected inner city and suburban schools varied by metropolitan area in our study. Inner city schools in Boston, Chicago, and St. Louis generally spent more per pupil than neighboring suburban schools, whereas selected suburban schools in Fort Worth and New York almost always spent more per pupil than the inner city schools. In Denver and Oakland, no clear pattern of spending emerged. Three factors generally explained spending differences between inner city and suburban schools: (1) average teacher salaries; (2) student-teacher ratios; and (3) ratios of students to student support staff, such as guidance counselors, librarians, and nurses. When we adjusted per-pupil expenditures to account for the extra resources students facing poverty, disabilities, and limited English proficiency might need, inner city schools almost always spent less per pupil than suburban schools. To compensate for additional challenges faced by schools in these areas, federal education dollars are generally targeted to low-income areas. As a result, federal funds have played an important role in increasing funding to inner city schools. Differences between inner city and suburban school per-pupil spending were related to the particular metropolitan area studied and generally seemed to be most influenced by teacher salaries. The selected inner city schools tended to outspend the suburban schools in the Boston, Chicago, and St. Louis metropolitan areas. For example, in the Boston metropolitan area, the lowest spending inner city school spent more per pupil than the highest spending suburban school. (See fig. 1 for a comparison of per-pupil spending at selected inner city and suburban schools in these areas.) 
In contrast, in the Fort Worth and New York metropolitan areas, suburban schools generally outspent inner city schools. For example, among the selected schools in the Fort Worth metropolitan area, the lowest spending suburban school had per-pupil expenditures 21 percent higher than the highest spending inner city school. (See fig. 2 for a comparison of per-pupil spending at selected inner city and suburban schools in these areas.) In Denver and Oakland, an examination of spending differences among the selected suburban and inner city schools revealed mixed results. That is, analysis of spending differences showed no general pattern of spending that favored either inner city or suburban schools. (See fig. 3 for a comparison of per-pupil spending at selected inner city and suburban schools in the Denver and Oakland metropolitan areas.)

Among the schools in our study, three factors influenced per-pupil spending: average teacher salaries, student-teacher ratios, and the ratio of students to student support staff. Average teacher salaries appeared to have the greatest impact on per-pupil spending, followed by lower student-teacher ratios and lower ratios of students to student support staff. Average teacher salaries influenced per-pupil spending in areas where inner city schools spent more per pupil (Boston and Chicago), where suburban schools spent more per pupil (New York), and where spending was mixed (Oakland). For example, in Chicago, where inner city schools generally outspent suburban schools, the median inner city school average teacher salary was $47,851, compared with $39,852 in the suburbs. In Oakland, where spending between suburban schools and inner city schools was mixed, the average teacher salary at the median spending suburban school was $60,395 and per-pupil spending was $4,849, compared with $52,440 and $4,022 at the median spending inner city school.

Student-teacher ratios and ratios of students to student support staff were factors that could offset the influence of teacher salaries in explaining per-pupil spending. For example, in Fort Worth, where the three suburban schools typically spent more per student than inner city schools, inner city teacher salaries were generally higher than suburban teacher salaries. However, ratios of students to both teachers and student support staff were lower in our selected suburban schools. For example, the median spending inner city school in Fort Worth had 21 students per teacher, compared with 17 students per teacher in the suburbs. Additionally, the median spending inner city school had 1 student support staff professional for every 162 students, whereas in the suburbs the ratio was 1 to 68. (Table 2 lists factors contributing to higher per-pupil spending—average teacher salaries, student-teacher ratios, and ratios of students to support staff—for the median spending school in each reviewed metropolitan area.)

Despite higher per-pupil spending by about half of the inner city schools in our study, inner city schools generally spent less than neighboring suburban schools when spending was weighted to account for differing compositions of student needs. To account for the greater costs that may be associated with educating low-income students, students with disabilities, and students with limited English proficiency, some researchers have used formulas that weight these students more heavily than other students. In a similar fashion, we applied weights to our per-pupil expenditure data.
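To show how this adjustment works, the sketch below compares two hypothetical schools under assumed weight combinations; the schools, their spending and enrollment figures, and the weight triples (taken from the low and high ends of the ranges cited earlier) are all illustrative rather than the study’s actual data or weights.

def weighted_per_pupil(spending, enrollment, low_income, disabled, lep, weights):
    # Each group adds (weight - 1) extra "students" to the enrollment denominator.
    w_li, w_dis, w_lep = weights
    weighted_enrollment = (enrollment
                           + (w_li - 1) * low_income
                           + (w_dis - 1) * disabled
                           + (w_lep - 1) * lep)
    return spending / weighted_enrollment

LOW, HIGH = (1.2, 1.9, 1.1), (2.0, 2.3, 1.9)   # assumed ends of the cited ranges

# Hypothetical pair: the inner city school spends more per pupil in raw terms
# ($6,000 versus $4,500) but serves far more students with special needs.
inner_city = (600_000, 100, 85, 12, 25)
suburban = (450_000, 100, 10, 10, 2)

for label, w in (("low", LOW), ("high", HIGH)):
    print(label,
          round(weighted_per_pupil(*inner_city, w)),
          round(weighted_per_pupil(*suburban, w)))
# With the low weights the inner city school still spends more per weighted
# pupil; with the high weights the comparison reverses.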
The use of the lowest and medium weights had little impact on spending differences between inner city and suburban schools. Inner city schools in Boston, Chicago, and St. Louis continued to outspend neighboring suburban schools in most cases. For example, in Chicago, when students were weighted with the lowest weight, per-pupil spending at the median inner city school was $3,743, compared with $2,996 at the median suburban school. Similarly, the use of medium weights generally did not result in higher per-pupil spending at suburban schools. For example, using medium weights, the median inner city school in Chicago still spent more than the median suburban school, although the difference was smaller—$3,089 compared with $2,858. However, when the highest weight was applied, inner city per-pupil spending fell below suburban school spending in almost all cases. For example, in Chicago, when the highest weight was applied, per-pupil spending at the median inner city school was less than that at the median suburban school, $2,629 compared with $2,734. Similarly, in the New York metropolitan area, where suburban schools we reviewed outspent inner city schools, the use of the highest weights to adjust for student needs caused the differences between inner city and suburban school spending to widen substantially. (See fig. 4 for examples of how spending changes as different weights are applied for per-pupil spending at the median inner city and suburban schools in four metropolitan areas.)

Because federal programs, such as Title I, specifically target funds to schools in low-income areas, these federal funds generally helped reduce or eliminate the gap between selected inner city and suburban schools in terms of per-pupil expenditures. In the Denver and St. Louis metropolitan areas, federal funds generally eliminated the gap between inner city and suburban schools’ per-pupil spending. In Fort Worth, without federal funds, per-pupil spending at the selected inner city schools would have been about 63 percent of that at the selected suburban schools, and in Oakland, about 78 percent. However, selected inner city schools in Boston and Chicago would have still spent more than suburban schools without federal funds. (See table 3 for a comparison of inner city and suburban per-child spending with and without federal dollars.)

Factors that may relate to student achievement differed between inner city and suburban schools in our study. Research has shown a positive relationship between student achievement and factors such as teacher experience, lower enrollment, more library books and computer resources, and higher levels of parental involvement. Among the 24 schools we visited, the average student achievement scores were generally lower in inner city than in suburban schools. Along with lower achievement scores, these inner city schools were more likely to have a higher percentage of first-year teachers, whose lack of experience can be an indicator of lower teacher quality. In addition, in comparison to the suburban schools, inner city schools generally were older, had higher student enrollments, and had fewer library books per pupil and less technological support. Finally, the type of in-school parental involvement in the inner city and suburban schools differed.

In general, at the schools we visited in the metropolitan areas of Fort Worth, New York, Oakland, and St. Louis,
inner city students’ average achievement scores on state reading assessment tests were lower than scores at the neighboring suburban schools. Two schools were exceptions to this pattern. In St. Louis, we specially selected one high-performing inner city school; students at this school performed higher than students at the three suburban schools we visited. In the Fort Worth metropolitan area, one inner city school performed slightly higher than two of the three suburban schools we visited. (See fig. 5 for average student achievement scores for selected schools in the four metropolitan areas.)

Although the selected inner city schools’ student achievement scores were generally lower, this pattern did not appear to be related to or consistent with per-pupil spending. That is, higher-performing schools were not necessarily schools that were high in per-pupil spending. For example, per-pupil spending at the highest-performing inner city school we visited in Fort Worth was $3,058, which was higher than that at one selected inner city school, lower than that at the other selected inner city school, and lower than that at each of the suburban schools.

First-year teachers in the 24 schools we visited generally constituted a higher percentage of the faculty in inner city schools than in suburban schools. First-year teachers comprised more than 10 percent of the teaching staff in 8 of the 12 inner city schools, but the same was true in just 4 of 12 suburban schools. However, both the percentage of first-year teachers and differences between inner city and suburban schools varied among the four metropolitan areas. (See fig. 6 for the percentage of first-year teachers by school and metropolitan area.) For example, in the New York metropolitan area there were no first-year teachers at 2 of the suburban schools, but at 2 inner city schools first-year teachers were 24 and 13 percent of the faculty. In the Fort Worth metropolitan area, 2 of the suburban schools had almost twice the percentage of first-year teachers as the two inner city schools with the highest percentages of first-year teachers. Notably, the percentage of first-year teachers was low at the two high-performing inner city schools. In Oakland, the percentage of first-year teachers at the high-performing inner city school was 6 percent, compared with 12 percent at the other two inner city schools. In St. Louis, the high-performing inner city school had no first-year teachers, whereas the other two inner city schools had 11 and 16 percent.

As noted earlier in the report, average teacher salaries accounted for most of the differences in school spending. The fact that teaching staffs at inner city schools generally included higher percentages of first-year teachers is not inconsistent with the finding on teacher salaries. The average teacher salary at a school includes the salaries of all teachers in the school, from first-year teachers to the most senior staff. For example, the average teacher salary at a school with a high proportion of first-year teachers could still be higher than that of another school because of the seniority of its other teachers and the district’s salary structure.

The enrollment of the 12 inner city schools we visited tended to be higher than that of the 12 suburban schools we visited, but enrollment varied across and within metropolitan areas. The national average elementary school enrollment is 443, and schools with enrollments over 600 are considered “large,” regardless of the school’s capacity.
In three of the four metropolitan areas we visited (Fort Worth, New York, and Oakland), the enrollment at the inner city schools was consistently higher than the national average enrollment. In addition, 6 of the 12 inner city schools we visited had enrollments over 600 students. In contrast, enrollments exceeded 600 in only 2 of the 12 suburban schools we visited. (See fig. 7 for enrollments at the selected schools.)

Among the schools we visited, most of the inner city schools were more than 50 years old, older than the national average of 43 years. Furthermore, 7 of the oldest 10 buildings were inner city schools, 2 having been built in the 19th century. In contrast, most of the suburban schools we visited were less than 40 years old.

In addition to the physical condition of the buildings, playground facilities in the inner city schools differed greatly from facilities in the suburban schools. Inner city schools we visited were less likely to have playground equipment and expansive play areas. For example, the playgrounds in St. Louis suburban schools all had green fields and a variety of playground equipment. In this same metropolitan area, only one of the inner city schools had any playground equipment, and at the other two schools asphalt lots were the only outdoor recreational facilities. Figure 8 shows the playgrounds of an inner city school and a suburban school in the St. Louis metropolitan area.

Overall, the inner city schools we visited had fewer library books per child and were less likely to have a computer laboratory than suburban schools. Most of the suburban schools visited were below the national average of 2,585 books per 100 students, although 7 of the 12 had more than 2,000 books per 100 students. However, only 3 of the inner city schools visited had more than 2,000 books per 100 students. For example, in New York City, the 3 selected inner city schools had fewer than 1,000 library books per 100 students, whereas the 3 selected suburban schools had more than 2,000 library books per 100 students and one had more than 3,000. Notably, the high-performing inner city school in St. Louis had 2,813 library books per 100 students, more than any of the suburban schools we visited in that area. Similarly, the high-performing inner city school in Oakland had 2,244 books per 100 students, which was more than the other two Oakland inner city schools and 2 of the 3 selected suburban schools. Furthermore, only 7 of the 12 selected inner city schools had a full-time librarian, whereas all but one of the suburban schools did. (See fig. 9 for the number of library books per 100 students at selected schools.)

Our site visits also revealed a difference between inner city and suburban schools in terms of the presence of a computer laboratory. Eleven of the 12 suburban schools we visited had a computer laboratory, whereas 8 of the 12 inner city schools visited had such a facility. Among schools with computer laboratories, however, the ratio of students to laboratory computers was similar among inner city and suburban schools.

Parents of children attending the suburban schools we visited were more involved in on-site school activities than parents of inner city children. According to the suburban school principals, parental involvement in their schools was typically very high and included participation in volunteer activities, attendance at parent-teacher conferences, and financial support to the school.
Parent volunteerism at suburban schools could be quite substantial. For example, parents at one suburban school in the Oakland metropolitan area provided 24,000 hours of volunteer time during the school year. Inner city principals characterized parents as concerned and interested in their children’s education, though less likely to attend parent-teacher conferences or to volunteer in school. A number of inner city principals we interviewed also noted that while parents generally wanted to help their children succeed in school, they often lacked the necessary finances, skills, or education to offer additional assistance beyond that offered by the school.

Our findings suggest that spending differences between the inner city schools and suburban schools in our review do exist, but these differences for the most part depend upon the metropolitan area. In some metropolitan areas, inner city schools spent more per pupil, whereas in others suburban schools spent more per pupil. Regardless of metropolitan area, spending differences for the most part seemed to result from differences in salaries and in the ratios of students to teachers and support staff. However, the very heavy concentration of poverty in inner city schools may place them at a comparative disadvantage, even when spending is equal. In addition, the suburban schools, as well as the high-performing inner city schools we visited, generally had more experienced teachers, lower enrollments, more library books per child, and more in-school parental volunteer activities than the other inner city schools in this study. These factors are important to consider in improving the performance of inner city schools.

We provided a draft of this report to the Department of Education for review and comment. Education’s Executive Secretariat confirmed that department officials had reviewed the draft and had no comments. We are sending a copy of this report to the Secretary of Education. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7215. See appendix III for other staff acknowledgments.

The objectives of our study were to provide information on similarities and differences between (1) per-pupil spending in selected inner city and suburban schools and (2) other characteristics that may relate to student achievement, such as teacher experience, school enrollment, educational materials, physical facilities, and parental involvement. To address the first objective, we reviewed the literature on spending differences, interviewed experts about the issues and approaches to measuring spending data, and collected spending and related school data on 42 inner city and suburban schools. To address the second objective, we examined the literature, interviewed experts about relationships between student achievement and school characteristics, and visited 24 inner city and suburban schools to collect information on student achievement, the quality and availability of educational materials, the condition of the buildings and facilities, and the type and extent of parental involvement.

This appendix discusses the scope of the study, criteria for selecting metropolitan areas and schools, and the methods employed to describe and explain observed spending differences. This study focused on similarities and differences between inner city schools and suburban schools.
This is distinct from a study of similarities and differences between urban and suburban schools, or urban and suburban districts, as urban schools and districts generally include a wider range of poverty than inner city schools.

This study covered selected inner city and suburban schools in seven metropolitan areas. Metropolitan areas were purposively selected to reflect diversity on the basis of geography and size. We used geographic areas from the Northeast, Midwest, South, and West. Three size categories were used: (1) very large, (2) large, and (3) medium. We defined these by population:

Very large: areas where the central city of a metropolitan area had a population of more than 1 million residents;

Large: areas where the central city of a metropolitan area had a population between 500,000 and 1 million residents;

Medium: areas where the central city of a metropolitan area had a population between 250,000 and 500,000 residents.

The metropolitan areas selected for inclusion in the study were Boston, Chicago, Denver, Fort Worth, Miami, New York, Oakland, and St. Louis. Inner city and suburban schools in Miami were dropped from the study because the district did not provide the necessary data. (See table 4 for the selected metropolitan areas.)

For this study, in consultation with experts, we defined “inner city” as a contiguous geographic area that (1) had a poverty rate of 40 percent or higher, (2) was located within the “central core” of a city with a population of at least 250,000 persons, and (3) was located in a city that is the central city of a metropolitan area with a population of at least 1 million persons. We defined “suburb” as the geographic area that is (1) outside the boundaries of a central city with a population of at least 250,000 persons, (2) inside the boundaries of the metropolitan statistical area (MSA) of the central city, as defined by the Office of Management and Budget and used by the census, and (3) within a metropolitan area with a population of at least 1 million persons.

In total, we collected spending data on 42 schools, 21 inner city and 21 suburban public elementary schools in seven metropolitan areas, and gathered information on (1) school-level per-pupil spending and federal revenues, and (2) school, teacher, other staff, and student characteristics for the 2000-01 school year. In addition, we conducted site visits at 24 of the selected schools. These schools were located in the New York, St. Louis, Fort Worth, and Oakland metropolitan areas. We visited them to obtain supplementary information on characteristics that might affect student achievement, such as facilities, educational materials, and types of parental involvement.

The study was designed to compare “typical” inner city and “typical” suburban schools, rather than those schools with extreme poverty or wealth. We consulted with experts about our design. We used the factors described below to select typical schools. Our goal was to make comparisons that would reflect likely differences, if any, between the inner city and suburban schools in a given metropolitan area.
To select the inner city schools, we (1) consulted with local experts in each metropolitan area to identify the geographic area of the central city of the MSA generally considered the inner city, (2) calculated census child poverty rates for each census tract within the inner city area, (3) retained identified census tracts with census child poverty rates higher than 40 percent, (4) ranked the census tracts by poverty rate, and (5) identified the three inner city census tracts closest to the 50th percentile, that is, the median poverty census tracts of the inner city. We then selected the public elementary schools that served those census tracts, but purposely excluded schools that were special schools, for example, magnet schools, science academies, etc.

Where possible, we attempted to include one high-performing inner city school in each metropolitan area we visited. We used Dispelling the Myth, an Education Trust (EdTrust) database of high-poverty, high-performing schools, for this selection. Dispelling the Myth is an ongoing EdTrust project to identify high-poverty and high-minority schools that have high student performance or have made substantial improvement in student achievement. We identified schools in that database with a student poverty rate greater than 50 percent and an overall achievement score on the most recent state reading assessment test above the 50th percentile. Because the EdTrust database used free and reduced lunch eligibility as its criterion for poverty, we further verified that each school was located in an inner city census tract, as defined by this study, and served an area with a census child poverty rate greater than 40 percent. We purposely excluded schools that were special schools, for example, magnet schools, science academies, etc. Inner city schools from the St. Louis and Oakland metropolitan areas met these criteria. The identified high-performing inner city school in St. Louis replaced a selected school. The identified high-performing inner city school in Oakland, however, was a school that would have been selected through the described census tract approach and was, therefore, treated similarly to the other selected inner city schools. (See table 5 for the selected inner city census tracts and child poverty rates.)

To select suburban schools, we (1) collected census child poverty rates for all school districts in the defined suburban area outside the central city of the selected metropolitan area and within the same state as the central city; (2) ranked the suburban school districts by census child poverty rates; and (3) identified the three suburban school districts closest to the 50th percentile, that is, the median suburban school districts, based upon child poverty rates. We dropped districts that were contiguous or had a 5- to 17-year-old population of less than 500 and replaced them with the district whose child poverty rate was next closest to the median and that did not have either of these attributes. For each of those districts, we selected the district’s elementary school. If more than one elementary school served the school district, we selected the elementary school with the median child poverty rate (as determined by free and reduced lunch eligibility) among elementary schools in that district. (See table 6 for the child poverty rates for the selected suburban school districts.)
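The ranking step used in both the tract and district selections can be summarized in a short sketch. The Python function below is illustrative only: it assumes census child poverty rates are already available as (identifier, rate) pairs, and it is not the software used for this study.

def select_closest_to_median(units, threshold=0.0, count=3):
    # Keep only units whose census child poverty rate exceeds the threshold
    # (40 percent for the inner city tracts; no threshold for suburban districts).
    eligible = sorted((u for u in units if u[1] > threshold), key=lambda u: u[1])
    median_rate = eligible[len(eligible) // 2][1]
    # Return the units whose poverty rates are closest to the median rate.
    return sorted(eligible, key=lambda u: abs(u[1] - median_rate))[:count]

# Example with invented tract identifiers and rates.
tracts = [("101", 0.46), ("102", 0.62), ("103", 0.41), ("104", 0.55), ("105", 0.70)]
print(select_closest_to_median(tracts, threshold=0.40))

For the suburban districts, the same routine would be run on district-level child poverty rates with no threshold, before dropping and replacing districts with the attributes described above.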
From the 42 selected schools, we obtained detailed information for the 2000-01 school year on (1) school spending and federal revenues, (2) staffing and teacher experience, and (3) student characteristics. The practical difficulties of conducting any data collection effort may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted or in the sources of information that are available can introduce unwanted variability into the results. We took steps in the development of the instrumentation, the data collection, and the data editing and analysis to minimize these errors. We pretested our data collection instrument with the Boston school district and called individual district officials to clarify answers. Completed instruments were examined for inconsistencies, and follow-up calls were made to districts to clarify imprecise responses or data that were unusually different from other respondent data.

School spending data included (1) instructional staff salaries, (2) certified professional staff salaries, (3) administrative staff salaries, (4) operations staff salaries, (5) education materials and supplies spending, and (6) building maintenance and repair spending. In addition, schools reported federal sources of revenue.

School, staff, and student information included (1) the number of regular education teachers and of special education, English as a second language, and other specialized instructional staff (for example, art teachers and reading teachers); (2) the number of regular education teacher assistants, special education teacher assistants, and other instructional staff teacher assistants (for example, art teacher assistants and reading teacher assistants); (3) the number of student support professional and nonprofessional staff by job title; (4) the number of administrators and administrative assistants by job title; (5) the number of operations staff by job title; (6) the number of first-year teachers; (7) total enrollment; (8) the number of students with disabilities and the number of students with limited English proficiency; (9) the race and ethnicity of students; and (10) the number of students eligible for free and reduced lunch.

Data on student achievement, facilities, educational materials, and parental involvement that may contribute to academic achievement were obtained from site visits to 12 inner city and 12 suburban schools. We developed a site visit protocol and pretested it at site visits to inner city and suburban schools in the New York and Baltimore metropolitan areas. We obtained information on student achievement. In Fort Worth, we used Grade 3 reading scores on the Texas Assessment of Academic Skills. In New York, we used Grade 4 scores on the State English Language Arts Assessment. In Oakland, we used Grade 4 reading scores on the Stanford 9 test. In St. Louis, we used Grade 3 Communication Arts scores on the Missouri Assessment Program. In each metropolitan area, we contrasted the achievement scores of the selected schools to the state average. Depending upon the data element, information was collected as a dichotomous variable (yes/no), a date or period of time, a number, or a ranked scale assessment. (See table 7 for school site visit information collected, assessment measure, and description of the measurement scale.)
For each metropolitan area, the three inner city schools and the three suburban schools were ordered by per-pupil spending and paired; that is, the lowest spending inner city school was paired with the lowest spending suburban school, the middle spending inner city school with the middle spending suburban school, and the highest spending inner city school with the highest spending suburban school.

To examine factors that explained differences in school spending, we conducted regression analysis. Regression analysis is a statistical methodology that measures the relationship between one variable and one or more other variables. In our regression model, we tried to determine the extent to which total per-pupil spending at a selected individual school could be explained by (1) average teacher salary at the school, (2) adjusted student-teacher ratio at the school, (3) the ratio of students to student support staff at the school, and (4) annual spending at the school on building maintenance and repair. The variables in the model were defined as follows:

Total per-pupil spending—total dollars spent by the school in the 2000-01 school year divided by total enrollment.

Average teacher salary—total salary expenditure for teachers at the school divided by the number of teachers. Teacher salary was used in the regression to capture the salary structure at the school.

Adjusted student-teacher ratio—total enrollment adjusted for students with special educational needs divided by the total certified instructional staff. Adjusted enrollment differed from total enrollment in that the adjusted enrollment included an additional weight of 100 percent for each child receiving special education instruction at the school and 50 percent for students with limited English proficiency. Adjusted enrollment was used to capture the direct higher spending by the school for students with special needs. Teachers included regular classroom teachers, special education teachers, teachers of students with limited English proficiency, art teachers, music teachers, physical education teachers, reading teachers, teachers for the gifted and talented, science teachers, and computer laboratory teachers. Teaching assistants and paraprofessionals were not included because their direct involvement with instruction was not always certain.

The ratio of students to student support staff at the school was computed by dividing the total enrollment by the total certified professional staff. Support staff was not adjusted for students with special needs because it was assumed that, at the school level, support staff time per student is less dependent upon the disability of the child. Total certified professional staff included administrators, health providers, and certified staff providing services to students.

Spending on building maintenance and repair at the school included contracted maintenance and repair and salary expenditures for building custodians and maintenance workers for the 2000-01 school year. (See table 8 for the regression results for factors explaining differences in per-pupil spending at the selected schools.)

Appendix II presents selected data on the 42 schools examined in the seven metropolitan areas, as well as additional information obtained from site visits at 24 schools. This appendix contains three tables of school-level information collected from selected inner city and suburban schools in seven metropolitan areas. Table 9 contains student characteristic information.
Student characteristic information includes enrollment, child poverty measured by the census, percent of students with disabilities, percent of students with limited English proficiency, and percent of minority children. Table 10 contains actual spending per child, as well as spending per child at low, medium, and high weights, for selected schools in seven metropolitan areas. Table 11 includes information on the percent of first-year teachers, federal dollars per child, and federal dollars as a percent of total spending.

In addition to those named above, Elisabeth Anderson, Shannon McKay, Eve Veliz, and Sarit Weisburd made key contributions to this report. Luann Moy provided important methodological contributions to the review of the research. Patrick DiBattista also provided key technical assistance.
The No Child Left Behind Act of 2001 has focused national attention on the importance of ensuring each child's access to equal educational opportunity. The law seeks to improve the performance of schools and the academic achievement of students, including those who are economically disadvantaged. The Congress, among others, has been concerned about the education of economically disadvantaged students. This study focused on per-pupil spending, factors influencing spending, and other similarities and differences between selected high-poverty inner city schools and selected suburban schools in seven metropolitan areas: Boston, Chicago, Denver, Fort Worth, New York, Oakland, and St. Louis. Among the schools GAO reviewed, differences in per-pupil spending between inner city and suburban schools varied across metropolitan areas, with inner city schools spending more in some metropolitan areas and suburban schools spending more in other areas. The inner city schools that GAO examined generally spent more per pupil than suburban schools in Boston, Chicago, and St. Louis, while in Fort Worth and New York the suburban schools in GAO's study almost always spent more per pupil than the inner city schools. In Denver and Oakland, spending differences between the selected inner city and suburban schools were mixed. In general, higher per-pupil expenditures at any given school were explained primarily by higher staff salaries regardless of whether the school was an inner city or suburban school. Two other explanatory factors were student-teacher ratios and ratios of students to student support staff, such as guidance counselors, nurses, and librarians. Federal funds are generally targeted to low-income areas to compensate for additional challenges faced by schools in those areas. In some cases, the infusion of federal funds balanced differences in per-pupil expenditures between the selected inner city and suburban schools. There is a broad consensus that poverty itself adversely affects academic achievement, and inner city students in the schools reviewed performed less well academically than students in the suburban schools. The disparity in achievement may also be related to several other differences identified in the characteristics of inner city and suburban schools. At the schools GAO visited, inner city schools generally had higher percentages of first-year teachers, higher enrollments, fewer library resources, and less in-school parental involvement--characteristics that some research has shown are related to school achievement.
Diabetes affects a significant portion of Medicare beneficiaries and results in an even larger share of Medicare costs. Diagnosed cases of diabetes are estimated to be 10 to 15 percent of the Medicare population, or roughly 3 million to 5 million people, and nearly as many cases may be undiagnosed. According to one estimate, treating people with diabetes may account for as much as 25 percent of all Medicare costs. People who have diabetes use more health services than nondiabetics: they have two to three times more ambulatory contacts (physician, emergency room, and hospital outpatient visits) and three times more hospitalizations, and they are more likely to live in nursing homes. Moreover, diabetes is the leading diagnosis associated with use of Medicare’s rapidly growing home health services, representing about 10 percent of all home health visits. In addition, complications of the disease clearly can diminish quality of life. Diabetes is a leading cause of blindness, end-stage renal disease, and lower extremity amputations; and people with diabetes have rates of coronary heart disease and stroke that are two to five times those of nondiabetics.

Diabetes experts generally agree that routine provision of several preventive and monitoring services can help physicians and patients manage the disease more effectively and control its progression. A 1993 landmark study, known as the Diabetes Control and Complications Trial (DCCT), and other studies have provided evidence of opportunities for improving care. The DCCT showed that improved glucose control can retard the onset and progression of the complications of diabetes. The American Diabetes Association’s (ADA) current recommendations for diabetes management, the most frequently cited clinical practice guidelines for diabetes care, reflect these studies’ results.

Most of the ADA-recommended preventive and monitoring services are covered benefits for Medicare beneficiaries with diabetes. Excluded as covered benefits, however, are some services and supplies that might facilitate active patient self-management. For example, people in traditional, fee-for-service Medicare (about 90 percent of all beneficiaries) bear the costs of insulin, syringes, and, in some cases, glucose test strips used to help monitor their blood sugar levels at home. For those beneficiaries enrolled in an HMO (about 10 percent of Medicare beneficiaries nationwide), these supplies and services may or may not be included in the benefit package, depending on the HMO. Some members of the Congress have proposed legislation that would expand Medicare coverage to include payment for diabetes education in an outpatient, nonhospital-based setting, as well as payment for blood-testing strips for all beneficiaries with diabetes.

Under both fee-for-service and HMO delivery, Medicare beneficiaries with diabetes are falling far short of receiving recommended levels of monitoring services, according to available evidence. A number of factors, both patient- and physician-related, may contribute to the low use of these services. The ADA clinical care guidelines reflect the published evidence and expert opinion on what constitutes quality diabetes care. The guidelines recommend monitoring services that, with appropriate follow-up and treatment, may lead to improved health outcomes. Receiving these monitoring services, however, does not guarantee improved blood sugar control or prevention of complications.
Nonetheless, experts generally agree that providing the monitoring services recommended by the ADA represents good diabetes care. Among the ADA’s recommendations for people who have noninsulin-dependent diabetes (more than 90 percent of diabetics in Medicare), we selected six monitoring services (see table 1) that can be measured using Medicare claims data. Several other recommended services were excluded because all occurrences could not be identified by this methodology. For example, foot examinations to detect people at elevated risk of ulcers and infections (and to prevent lower extremity amputations), when provided, are most likely to be part of an office visit and if so would not be claimed as a separate service. The recommended service frequencies specified in table 1 generally apply to the average person with noninsulin-dependent diabetes. However, some debate surrounds the most appropriate frequencies for certain individuals, particularly older people with diabetes: for example, whether the eye exam should be provided annually or whether providing it every 2 years is just as effective. Some individuals may need more or fewer services depending on their age, medical condition, whether they use insulin, or how well their blood sugar is controlled. According to an ADA representative, a small percentage of people with diabetes could appropriately receive certain recommended services at reduced frequency. Overall, our cohort of about 168,000 Medicare beneficiaries with diabetes fell far short of receiving the recommended frequencies of the six monitoring services in 1994. As figure 1 shows, Medicare beneficiaries with diabetes had the opportunity to receive such services because 94 percent of them had at least two physician visits in 1994. In fact, the mean number of physician visits was 9.5. However, less than half of these beneficiaries with diagnosed diabetes received an eye exam (42 percent), only 21 percent received the two recommended glycohemoglobin tests, and only about half (53 percent) had a urinalysis. More Medicare beneficiaries with diabetes (70 percent) received a serum cholesterol test than any of the services except physician visits. This may reflect both the successful public education campaign of the late 1980s about cholesterol risks and the frequent inclusion of cholesterol in automated multichannel blood tests. The annual flu shot is likely to be underreported in Medicare claims data because many people receive flu shots in nonmedical settings such as shopping malls and business offices. One HCFA official estimated that Medicare claims may underreport the number of flu shots received by as much as 20 percentage points. Utilization rates are even lower when considering the monitoring services as a unit. (See fig. 2.) About 12 percent of the Medicare beneficiaries with diabetes in our cohort did not receive any of the following key monitoring services: at least one eye exam, one glycohemoglobin test, one urinalysis, and one serum cholesterol test. About 11 percent of beneficiaries showed Medicare claims for all four of these services. Utilization rates for the six monitoring services by patient age, sex, race, and geographic characteristics were as follows: Utilization rates were generally similar for men and women and for all age groups over age 65. The single most notable utilization difference was in the annual eye examination rate for people with diabetes under age 65. 
Forty-three percent of people with diabetes aged 65 to 74 and 44 percent of those aged 75 and older received an eye exam, compared with only 28 percent of the disabled in Medicare under age 65. White Medicare beneficiaries with diabetes received the six monitoring services at consistently higher rates than did beneficiaries who were black or of another racial group, but for most services the differences were not great. For example, the utilization rate for the eye exam was 43 percent for whites, 36 percent for blacks, and 37 percent for beneficiaries of other races. The rate for at least one glycohemoglobin test was 39 percent for whites, 31 percent for blacks, and 37 percent for beneficiaries of other races. The use of diabetes monitoring services varied by geographic area. For example, among the 10 states that had the largest Medicare fee-for-service diabetes populations in our study, Florida and New York had the highest percentages of beneficiaries with diabetes who received all four key services, at 18 and 16 percent, respectively; Pennsylvania had the lowest rate, 8 percent. As another example of this variation, of all 50 states and the District of Columbia, Nebraska had the highest eye exam rate (54 percent), and Alabama had the lowest (32 percent), followed by Tennessee and Oregon (33 percent). Seventy-four percent of our Medicare fee-for-service diabetes cohort lived in Metropolitan Statistical Areas (MSAs), and the remaining 26 percent lived in non-MSAs, generally rural areas. Monitoring services’ utilization rates were slightly but consistently higher for beneficiaries living in MSAs, as a whole, than for those living outside MSAs. (Detailed data on service utilization rates by these characteristics appear in app. I.)

Because HCFA does not require its HMO contractors to report patient-specific utilization data, we could not systematically assess the use of recommended monitoring services by beneficiaries with diabetes in Medicare HMOs. Unlike fee-for-service providers, Medicare HMOs are paid a monthly rate per enrollee, regardless of the actual services provided. Therefore, to be paid, the plans do not need to document utilization, costs of care, or patient case mix. Individual plans, however, may develop such information for in-house management purposes.

Diabetes monitoring services’ utilization rates are also below recommended levels in Medicare HMOs, according to the limited data we obtained from published research and other sources. For example, the HMO component of HCFA’s Ambulatory Care Diabetes Project, including 23 health plans that volunteered as project participants in five states (California, Florida, Minnesota, New York, and Pennsylvania), determined that 61 percent of Medicare enrollees received an eye exam in an 18-month period ending in 1995; 69 percent received at least one glycohemoglobin test. Another indicator of the level of monitoring services provided to people with diabetes in HMOs is the eye exam rate reported in the Health Plan Employer Data and Information Set (HEDIS), a standardized, voluntary HMO performance reporting system developed by the National Committee for Quality Assurance (NCQA). HEDIS data are the most commonly used HMO performance measures for the non-Medicare, employer-insured HMO population. Nationwide, the average diabetic eye exam rate reported by HMOs participating in HEDIS was 42 percent in 1995, but the rate varied widely among the few plans whose reports we obtained, ranging from 20 to 70 percent.
Although it is unclear whether these rates also apply to Medicare beneficiaries with diabetes enrolled in HMOs, the national average rate of 42 percent was the same rate we found in our 1994 Medicare fee-for-service population. Although it is unclear what specifically accounts for the less-than-recommended use of monitoring services, diabetes experts have identified several factors, including patient and physician attitudes and practices, that contribute to suboptimal diabetes management in general. Many of these factors are not unique to diabetes management; they also affect delivery of preventive care for many other chronic conditions.

Experts agree that the patient bears much of the responsibility for successful diabetes management. For a variety of reasons, however, people with diabetes may not actively manage their disease. Lack of knowledge, motivation, and adequate support systems are often cited as key reasons. People with diabetes may not fully understand the seriousness of their disease or the need for regular preventive and monitoring services. Consequently, they may not always follow up on routine appointments and referrals. For many, diabetes self-management does not become a priority until serious complications develop. Then, difficult changes in well-established habits, such as diet, lack of exercise, and smoking, may be needed. A family support system is important to help patients make such changes, but it is often lacking. Experts have also noted that the substantial out-of-pocket costs for people with diabetes—which can result from incomplete insurance coverage for diabetes-related supplies, such as insulin, syringes, and glucose-testing strips—may discourage some people with diabetes from actively managing their disease. For example, syringes may cost about $10 to $15 per 100, insulin costs about $40 to $70 for a 90-day supply, glucose-testing meters cost from $50 to $100, and glucose-testing strips cost $.50 to $.72 each (or about $1,000 a year for a person with diabetes who tests four times a day).

Physicians and other health care providers also may contribute to low utilization rates for recommended services, according to literature reports and experts we contacted. Some physicians may not be well versed in the latest diabetes care guidelines, or they may not know of recent research demonstrating the efficacy of treatments. Others may disagree with the need for all recommended services for all patients or, specifically, with the recommended frequency of services. Some physicians may be discouraged from active diabetes management with older patients because, though some monitoring services may identify complications, they do not prevent them; and without patient behavior changes, health outcomes are unlikely to improve. Another important factor affecting physician practices is the severity of a patient’s diabetes and the extent of other medical problems. Many Medicare beneficiaries with diabetes have several serious medical conditions. We were told that during a patient visit, a physician is likely to focus on a patient’s most urgent concerns, neglecting ongoing diabetes management and patient education. Finally, inadequate support systems for many providers may contribute to less-than-recommended service delivery, according to some reports.
Managed care plans and physician practices may lack automated medical records and service-tracking systems that could provide timely records of patient service use and reminders when routine preventive and monitoring services, such as those for diabetes, are needed.

Collectively, the 88 HMOs in our survey reported a wide range of diabetes management efforts; in general, however, most plans’ efforts are limited. The HMOs identified more than 30 different kinds of diabetes management activities, ranging from featuring articles on diabetes in their publications to monitoring the degree to which their physicians are providing preventive services. The type and number of reported activities vary greatly: a few HMOs have comprehensive diabetes management programs, but most plans’ efforts are much more limited. HMOs told us that they have focused their efforts on educating people with diabetes about self-management and their physicians about the need for recommended preventive and monitoring services. Even HMOs with comprehensive diabetes management programs have initiated their efforts mostly in the past 3 years. As a result, little is known yet about the effectiveness of these efforts or which approaches work better than others.

Although we did not survey fee-for-service group practices on their diabetes management approaches, several of these groups also may be exploring ways to improve diabetes care in response to the DCCT research findings and practice guidelines. For example, one multispecialty group practice has established a comprehensive diabetes education and treatment center, and another group told us they have started to monitor utilization of the diabetic eye exam and have implemented a quality-improvement program to increase utilization.

Every HMO in our survey reported using at least one type of effort to educate enrollees with diabetes about appropriate diabetes management. Following are examples of the kinds of approaches they reported:

Written materials: The most common approach (used by 82 of the 88 plans) is featuring articles about diabetes management in publications directed to all enrollees. Other approaches include placing brochures about diabetes management in physicians’ waiting rooms and making a comprehensive manual on diabetes care available to all enrollees with diabetes.

One-on-one educational sessions: Sixty-eight HMOs reported having diabetes-related health professionals, such as nurses, certified diabetes educators, or other specialists, provide diabetes education to individuals with diabetes. During our follow-up interviews with 12 plans, the HMOs reported a wide variety of approaches to educating such enrollees, from regular meetings with experts on exercise and nutrition to a telephone-advice service that fields enrollees’ questions about diabetes.

Classes: During our follow-up interviews, we learned that a number of HMOs offer classes for several levels of diabetes education: basic classes for people newly diagnosed with diabetes, intermediate classes to provide ongoing management support, and advanced classes for people with diabetes who want to learn how to closely control their blood sugar levels.

Besides educational efforts for enrollees, most HMOs said they also had begun educational efforts for physicians. Commonly used techniques to educate physicians on the importance of preventive care include sending written materials (reported by 71 plans) and holding meetings with groups of physicians (46 plans).
Nearly three-fourths of the HMOs reported using clinical practice guidelines on diabetes care. Some supplement these guidelines with more intensive education. For example, one HMO reported that its endocrinologists meet regularly with small groups of primary care physicians to provide training on important diabetes topics, such as diabetic eye disease and foot care. The plan has also developed a physician training video on diabetic foot care.

Some of the HMOs—10 of the 88 we surveyed—contract with disease management companies to provide diabetes education services. One such company, for example, offers what it calls three platforms of services: (1) educational mailings, (2) telephone-based education and counseling, and (3) face-to-face education and counseling. For a fixed, per-person monthly fee, which varies by the platform selected by the contracting group, the disease management company provides services to any of the plan’s enrollees with diabetes who choose to participate.

Although education may effect short-term behavioral changes, some experts express concern about the difficulty people with diabetes and physicians have in maintaining behavioral changes. Information about managing diabetes is essential to good control of blood sugar levels, but information alone may not be enough to motivate the behavior and lifestyle changes necessary to maintain such control. For example, one diabetes expert told us that many people with diabetes revert to old behaviors within 6 months unless they receive additional education or support. As the director of diabetes clinical research at a large pharmaceutical firm put it, “the successful implementation of good diabetes management, through good control of blood sugar levels, can only be achieved through significant daily changes in lifestyle by the diabetic. This is very hard to do.”

HMOs reported using a wide variety of approaches to continuously encourage appropriate diabetes management. Following are some of the approaches they reported:

Reminders to enrollees and physicians: About half of the HMOs reported one or more such efforts. For example, one HMO provides a small, wallet-sized “scorecard” to enrollees with diabetes that lists recommended annual services and has a chart for enrollees to record the dates they receive each service. One HMO posts signs in examining rooms reminding people with diabetes to remove their shoes and socks to prompt physicians to check patients’ feet, and another attaches service reminder sheets to enrollees’ charts when they come in for any visit.

Performance monitoring and feedback: Many health plans are trying to improve preventive care utilization rates by providing feedback to physicians on their compliance with recommended standards. Of the 62 plans that reported use of a clinical practice guideline for diabetes, 52 have a system to monitor physicians’ compliance with it. The plans are most likely to monitor utilization of services related to HEDIS reporting requirements, and some reported systems to convey such utilization results to their physicians.

Diabetes registries: HMOs reported maintaining regularly updated registries of their enrollees with diabetes to monitor overall compliance with recommended standards and to mail them information and appointment reminders. For example, one HMO uses its registry and its claims records to mail a reminder letter to enrollees who have not received an eye exam in the past year.
Another plan combines its diabetes registry with pharmacy, laboratory, and billing data, all of which can be accessed by physicians to review a patient’s use of services and determine which services should be provided. Diabetes clinics: A few HMOs reported offering regular comprehensive diabetes care clinics. This involves the HMO setting aside days when people with diabetes can see their physicians, a nutritionist, a podiatrist, and other specialists and receive all necessary laboratory services in a single visit. One HMO reported the hope that these clinics would encourage self-sustaining diabetes support groups, while reducing the number of physician office visits. Support systems: One HMO has been providing education and support to Medicare beneficiaries who have diabetes or asthma through a voluntary, confidential, toll-free telephone system. Nurse counselors trained in these chronic diseases answer health care questions, provide education, and encourage self-management skills. Five of the HMOs reported committing substantial resources to develop a systemwide comprehensive diabetes management program. For example, one HMO we contacted has established a population-based approach to diabetes management, with long-term goals of improving patient health status and satisfaction as well as performance on cost and utilization. The HMO measures patient outcomes with both clinical and subjective values, which range from improved blood glucose control and prevention of microvascular disease to the patient’s assessment of improved quality of life and sense of well-being. The plan relies on a variety of interventions to meet enrollees’ needs, including diabetes chronic care clinics at several family practice sites, patient self-management notebooks, and diabetes telephone education. Interventions designed to help physicians provide better care to enrollees with diabetes include an online diabetes registry for physicians that is updated monthly, use of evidence-based clinical practice guidelines, outcomes reports for physicians, and provider education and training by diabetes expert teams consisting of an endocrinologist and a nurse. These teams travel to all family practice sites several times each year to see patients jointly with the family practice teams. HMOs in our survey generally had little information about the extent to which their diabetes management approaches have affected the use of recommended monitoring services. Even the plans reporting the most comprehensive approaches told us that they collect utilization data on five or fewer services and began collecting this information in 1993 or 1994. Some HMOs said they collect no such data. The service monitored most often (by 58 HMOs) was the diabetic eye exam, probably because HEDIS, the performance-reporting system for commercial HMOs, requires plans to measure the percentage of their enrollees with diabetes under age 65 who receive an annual eye exam. Although little information exists on the relative effectiveness of specific approaches, most experts generally believe that intensive and sustained interventions are most likely to support long-term behavior change. For example, one disease management company told us that its in-person counseling and education program is likely to be more effective at improving utilization rates than communicating with enrollees by telephone or mailings. Because intensive interventions are probably more expensive to provide than other approaches, measuring their effectiveness is important. 
Of the 88 plans surveyed, only 13 reported having information about the effect of their diabetes management efforts on the service use or health outcomes of their enrollees with diabetes or on the costs to their plans. This is largely because most diabetes management programs are relatively new, and plans do not have systems established to collect and analyze data on outcomes or cost. From the plans that reported information about the effectiveness of their diabetes management efforts, we heard the following: Using a variety of strategies, one HMO has shown improved utilization and outcomes. Annual eye examinations increased from 47 percent of enrollees with diabetes in 1994 to 53 percent in 1995, and glycohemoglobin test results showed that the percentage of enrollees with diabetes in good or optimal control improved from 35 to 39 percent. Officials of another HMO believe that increased utilization of annual eye exams and glycohemoglobin testing, measured over a 2-year period, is attributable to a program that includes mailings to people with diabetes and an annual performance report for physicians. To increase utilization of the eye exam, the HMO used its diabetes registry to identify 24,000 enrollees with diabetes who had no record of ever receiving an eye exam. After sending letters to those enrollees and their physicians, the plan found that 2,640, or 11 percent, went for an eye exam within 3 months, and, as a result, 48 were referred for appropriate treatment. One HMO found that enrollees’ glycohemoglobin values improved by 16 percent after the HMO established a diabetes management program, including a 2-day self-management class for enrollees newly diagnosed with diabetes, quarterly follow-ups with a certified diabetes educator or registered nurse, quarterly reminder letters about scheduling appointments, and a communication system for the plan’s multidisciplinary diabetes team. According to plan officials, in many cases, their enrollees were able to stop taking insulin and control their diabetes with other methods. HCFA has identified diabetes as a major health problem in the Medicare population and has targeted the disease for special initiatives to improve physician and patient awareness, service delivery, and, ultimately, patient health outcomes. As in the private sector, however, most of HCFA’s diabetes management initiatives are either new or not yet under way; therefore, clear evidence on which approaches are most effective is not yet available. In addition, some experts suggest that the agency should do more to encourage improved diabetes management. Four years ago, HCFA officials crafted a strategic plan for the agency that was designed to move it from its traditional role as a payer to that of a responsible, value-based purchaser. HCFA’s mission includes not only protecting the fiscal soundness of HCFA programs, but also ensuring access to affordable, quality health services for its beneficiaries to improve their health status. To this end, HCFA officials determined that diabetes care was a suitable target for action initiatives. HCFA has started several types of initiatives designed to educate beneficiaries and physicians about diabetes management and to encourage increased use of recommended services.
These initiatives are based on the belief that if beneficiaries and providers know about the steps involved in effectively managing diabetes, and if systems are in place to help remind them when certain services are needed, then both may take a more active role in ensuring that appropriate diabetes services are delivered. Following are some of HCFA’s initiatives in this area: Nationwide Diabetes Education Program: HCFA is actively participating in the National Diabetes Education Program, organized by CDC and the National Institute of Diabetes and Digestive and Kidney Diseases, part of the National Institutes of Health. This program is designed to increase general public awareness of diabetes as well as patient and provider education about diabetes and practice guidelines. A draft program plan is expected by June 1997. Local projects to encourage utilization: HCFA contracts with peer review organizations (PRO) to conduct local projects to improve the quality of care for Medicare beneficiaries. Working with the HCFA regional offices, PROs currently are required to implement at least one diabetes-related quality-improvement project involving the providers in their states. Twenty-one PROs have reported a total of 33 diabetes-related projects now under way. For example, the PRO in the state of Washington has developed a method, using Medicare claims data, for identifying beneficiaries with diabetes who are at high risk of lower extremity amputations and encouraging them to get therapeutic shoes to prevent such complications. In addition to fee-for-service quality projects, many PROs are working with HMOs to develop strategies for improving diabetes care, including patient information mailings and physician reminder systems. In Arizona, the PRO has collected baseline data on 15 quality indicator services from six participating HMOs. Together, they have implemented a variety of interventions, including the creation of diabetes databases, special referral and education for noncompliant patients, and the provision of diabetes services to homebound patients. After 1 year of implementation, utilization of the quality indicator services has improved by 38 percent. Multistate evaluation of intervention strategies: HCFA’s Ambulatory Care Diabetes Project involves fee-for-service and HMO providers and PROs in eight states. The two-part project has completed baseline data collection on diabetes service utilization. The intervention stages have been completed, and the remeasurement phase began on January 1, 1997. Participating HMOs have been developing a wide variety of interventions not limited to education, such as reminders to enrollees and physicians and special incentives for beneficiaries. HCFA also has committed to encouraging the development of better data-collection systems for tracking service use. The agency is planning several initiatives to develop better information on utilization: Application of HEDIS performance measures in Medicare: This year, for the first time, HCFA will require its HMO risk and cost contractors to report the new HEDIS 3.0 performance measures, including the diabetic eye exam rate and flu shot rate. A measure of the glycohemoglobin test may be added in the future. HCFA eventually plans to release this information as part of a comparative “report card” on Medicare HMOs to help beneficiaries choose among plans.
Expansion of performance measurement to include fee-for-service: HCFA is considering pilot tests to determine the feasibility of expanding performance measurement to include fee-for-service beneficiaries in addition to HMO beneficiaries. Such an expansion would most likely include the diabetes measures used in HMO plans and examine performance both at the community level and among beneficiaries receiving care from large group practices. Development of other measurement systems: HCFA is supporting the development of other process- and outcomes-based performance-measurement systems for monitoring diabetes care. Specifically, HCFA awarded a contract to the RAND Corporation to refine quality-of-care measures, including diabetes measures, developed by the Foundation for Accountability. These measures may be tested in Medicare HMOs and fee-for-service in 1997, and, if successful, HCFA may consider adopting them as a reporting requirement in 1998. Registry of beneficiaries: HCFA’s Office of Research and Demonstrations is planning an ongoing registry of a representative sample of Medicare beneficiaries in fee-for-service and HMOs that would provide a study population for regular surveys of health status, health history, and socioeconomic and functional status. This new program would provide a valuable database for a wide range of studies, including research on the chronically ill, such as people with diabetes. Because several of HCFA’s diabetes management initiatives have started only recently, and others are still in the planning stages, it is not yet possible to determine which of these projects are most likely to be effective. Some experts have suggested that HCFA should do more, including the following: test the effects of easing potential barriers to active diabetes self-management, such as the current limitations on coverage of supplies (including blood-testing strips) and diabetes self-management education; implement incentive systems to reward physicians for achieving and maintaining practice changes that promote better health outcomes; test diabetes management programs, such as mailed reminder cards or a telephone counseling service, with voluntary Medicare patient participation; and support provider-certification programs specifically for diabetes care that are being developed by professional organizations. Diabetes care is a microcosm of the challenges facing the nation’s health care system in managing chronic illnesses among the elderly. The prevalence and high cost of diabetes make it an opportune target for better management efforts. When beneficiaries receive less than the recommended levels of preventive and monitoring services, the result may be increased medical complications and Medicare costs. On the other hand, following the recommendations may enhance beneficiaries’ quality of life. Effectively managing diabetes is hard to accomplish, however, and requires a concerted effort by beneficiaries and physicians. People with diabetes often do not understand or fully appreciate the seriousness of their disease or the potential for serious complications. Physicians, whether in fee-for-service or managed care, may not take all steps necessary to ensure that their patients with diabetes receive recommended preventive care. Among HMOs, where coordinated care and prevention are expected to receive special emphasis, many plans are exploring ways to improve diabetes management through reminder systems, telephone hot lines, incentive programs, group clinics, and other approaches.
In general, however, providers may be reluctant to invest in more targeted and expensive approaches until their cost-effectiveness is more evident. Recognizing the importance of this issue, HCFA has initiated a reasonable and promising strategy of testing a variety of approaches to learn what works in Medicare—that is, what is effective and what can be implemented at reasonable cost. HCFA officials generally agreed with the information and issues discussed in a draft of this report, noting that “interventions to prevent the progression of early complications . . . cause significant morbidity are of key importance to this high risk population.” They raised one conceptual issue on the appropriate quality of care for elderly diabetes patients. Most Medicare beneficiaries with diabetes have had the disease for many years and are likely to have other serious chronic conditions. Therefore, the appropriate frequency of certain monitoring services, such as glycohemoglobin testing, should depend on the treatment regimen for an individual patient, rather than on a generic recommendation. HCFA officials also provided a number of technical suggestions that we incorporated where appropriate. A copy of HCFA’s comments appears in appendix III. We recognize that the service and frequency recommendations in the ADA guidelines are not standards to be applied absolutely to every Medicare beneficiary with diabetes but represent good care for an average person. Because we examined the records for more than 168,000 Medicare beneficiaries, we believe our conclusions on aggregate underperformance of preventive and monitoring services are accurate. In addition, we obtained comments on our draft report from several experts in diabetes care and public health. They generally agreed with our finding that the use of diabetes preventive and monitoring services could be improved. Like HCFA officials, they observed that differences among individuals with diabetes may justify some variation in the use of recommended services. We responded to these points and incorporated technical comments as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others on request. Please call me at (202) 512-7119 if you or your staff have any questions. Major contributors to this report are listed in appendix IV. A 1995 HCFA study of eye examinations for Medicare beneficiaries with diabetes in the state of Washington provided a model for identifying people with diabetes and specific services in the Medicare claims data. We modified that model to address our research question on the basis of published research in the field, consultation with HCFA officials involved in similar studies and a Medicare part B carrier, and input from an informal panel of expert reviewers. The analysis was performed in three steps: (1) selecting a cohort of Medicare beneficiaries with diabetes, (2) adding beneficiary data to select only people who were enrolled in Medicare fee-for-service and part B during the entire study period, and (3) analyzing cohort characteristics and 1994 service utilization rates. This appendix describes the general methodology and results. We used HCFA’s 5% Sample Beneficiary Standard Analytical File (SAF) to obtain a nationwide representative sample of Medicare beneficiaries.
This file contains final action claims data for a 5-percent sample of Medicare beneficiaries. We determined that this file would provide a sufficient number of claims from which to select a representative sample of Medicare fee-for-service beneficiaries with diabetes. We limited this part of our analysis to two parts of the 5% Sample Beneficiary SAF—Inpatient Data and Physician/Supplier Data—for calendar years 1992 and 1993. We did this because our selection criteria involved only inpatient hospital and physician services. To be selected for our cohort, a beneficiary had to have had at least one inpatient hospital admission or two physician visits coded for diabetes. Because we wanted to measure the extent to which Medicare beneficiaries with diabetes received recommended medical services, we selected only beneficiaries we could positively identify as having diabetes. HCFA officials advised us that hospital inpatient claims noting a diagnosis of diabetes were reliable. Therefore, we required only one hospital inpatient admission for selecting a beneficiary. Physician/Supplier Data, however, might note a diabetes diagnosis when a beneficiary was being tested for diabetes, even if the test result was negative. Therefore, to avoid selecting people without diabetes, we required beneficiaries to have had at least two physician visits with a diagnosis of diabetes before selecting them on the basis of physician visits alone. To eliminate selections based on a physician office visit (claim 1) and a laboratory or other procedure arising from the same visit (claim 2), we selected only claims coded as “face-to-face” physician visits. After adding enrollment and eligibility data to our diabetes cohort records, we could delete certain beneficiary groups from our sample. First, we excluded all beneficiaries with a date of death on or before December 31, 1994, because these people would not have had a complete year’s service history for 1994. We also excluded beneficiaries who were not enrolled in part B (for coverage of physician services) for all of 1994. They might have received services for which they paid themselves, and Medicare would have had no record of the services. Likewise, we excluded beneficiaries who were enrolled in an HMO at any time during the year because Medicare would have had no claims records for the services they received while in the HMO. Finally, after reviewing preliminary data, we excluded (1) end-stage renal disease beneficiaries because we could not determine whether some services we were looking for had been put under a different procedure code and (2) beneficiaries with diabetes living outside the 50 states and the District of Columbia. During this step, we also resolved changes in beneficiary identification numbers and obtained current residence and demographic data. We used the Enrollment Data Base and Health Insurance Skeleton Eligibility Write-Off files for this purpose. The last step was to determine the services received by our diabetes cohort in 1994 by comparing the cohort with the 1994 5% Sample Beneficiary SAF. This time, we checked all six component claims files: Inpatient, Hospital Outpatient, Physician/Supplier, Skilled Nursing Facility, Home Health, and Hospice. We also checked a special file of influenza vaccinations developed by HCFA.
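The selection and exclusion rules just described can be summarized in a short sketch (Python; the record layouts and field names are hypothetical and do not reflect the actual structure of the SAF or enrollment files):

from datetime import date

def meets_selection_rule(inpatient_claims, physician_visits):
    # At least one inpatient admission coded for diabetes, or at least two
    # face-to-face physician visits coded for diabetes.
    inpatient_dx = sum(1 for c in inpatient_claims if c["diabetes_dx"])
    visit_dx = sum(1 for v in physician_visits if v["diabetes_dx"] and v["face_to_face"])
    return inpatient_dx >= 1 or visit_dx >= 2

def passes_exclusions(beneficiary):
    # Exclusions applied after enrollment and eligibility data are merged in.
    if beneficiary["date_of_death"] is not None and beneficiary["date_of_death"] <= date(1994, 12, 31):
        return False  # died on or before December 31, 1994
    if not beneficiary["part_b_all_of_1994"]:
        return False  # not enrolled in part B for all of 1994
    if beneficiary["hmo_enrollment_in_1994"]:
        return False  # HMO enrollment means no fee-for-service claims record
    if beneficiary["esrd"]:
        return False  # end-stage renal disease
    if beneficiary["outside_50_states_and_dc"]:
        return False  # residence outside the 50 states and the District of Columbia
    return True

example = {
    "date_of_death": None,
    "part_b_all_of_1994": True,
    "hmo_enrollment_in_1994": False,
    "esrd": False,
    "outside_50_states_and_dc": False,
}
print(passes_exclusions(example))  # True

A beneficiary enters the final cohort, and therefore the denominator for the 1994 utilization rates, only if both checks pass.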
We searched the claims files for procedure codes for six diabetes preventive and monitoring services recommended by the American Diabetes Association (ADA): physician visits, glycohemoglobin test, dilated eye examination, urinalysis, serum cholesterol test, and influenza vaccination. We determined the number of beneficiaries in our cohort who received each of the services as well as combinations of services. These numbers provided numerator data to calculate the percentage of cohort members with diabetes who received the services at recommended intervals. The denominator was the total number of beneficiaries with diabetes that we identified in our final cohort (that is, the 168,255 beneficiaries who were alive through 1994 and continuously enrolled in Medicare part B and fee-for-service). We analyzed the six service utilization rates by patient age, race, sex, Medicare eligibility category, and state and Metropolitan Statistical Area of residence. Tables I.1 to I.7 provide detailed data from some of these analyses, along with a demographic description of the final 1994 Medicare fee-for-service diabetes cohort. Determining service utilization rates using Medicare claims data presents potential sources of bias. On the one hand, rates based on services identified in the claims data may underestimate actual utilization because claims or billing data may be miscoded, incomplete, or missing. When people receive services in nonmedical settings or if for any reason a bill is not submitted to Medicare, no record of the service appears in claims data. We believe influenza vaccination is the service most affected by such underreporting in our study, but underreporting may apply to other services to a lesser extent. On the other hand, our rates may be overstated because our cohort consists of Medicare beneficiaries with a known diagnosis of diabetes who used diabetes-related services in 1992, 1993, and 1994. These individuals had relatively strong ties to the health care system and were perhaps more likely than the average beneficiary to be referred to and follow up on recommended services. Nonetheless, these potential biases are not great enough to invalidate our findings. In interpreting our results, it should be noted that (1) service utilization rates are not adjusted to reflect differences in the severity of diabetes or the extent of comorbidities among cohort members; (2) physicians and diabetes experts may disagree about optimal frequencies for some monitoring services in some patients because research evidence may be inconclusive and individual patients vary in age, comorbidities, and other factors; and (3) performing monitoring services as recommended does not ensure improved health outcomes. Some studies have shown, for example, that increased frequency of glycohemoglobin testing has not been associated with improved blood glucose values.
This appendix discusses our examination of diabetes management efforts by Medicare HMOs. It briefly describes our methodology and the key findings from our survey. To better understand the approaches to diabetes management used by HMOs, we conducted a telephone survey of nearly half of the current Medicare risk-contract plans. We selected plans that had (1) enrollment of at least 1,000 Medicare beneficiaries (as of April 1996) and (2) a contract effective date no later than December 31, 1993.
By using minimum enrollment and participation date as selection criteria, we could eliminate plans with so few Medicare enrollees that their population of enrollees with diabetes might be too small to warrant special diabetes management efforts, as well as plans new to Medicare that might not be fully familiar with the special needs of Medicare enrollees. Of the 201 Medicare risk-contract HMOs operating in April 1996, 90 plans met these criteria, and we interviewed representatives of 88 of the plans (2 plans did not participate). Data on plan characteristics were obtained from HCFA reports and officials (see table II.1). The telephone survey, consisting of 23 multiple-choice and open-ended questions, was designed to determine each HMO’s specific approaches to diabetes management. The questions addressed interventions targeted to plan enrollees and physicians, as well as plan-level activities, such as the HMO’s ability to identify its enrollees with diabetes and monitor utilization rates of recommended services. To administer the survey, we interviewed the individual identified by the plan as being most familiar with plan approaches to diabetes management. In most cases, the respondent was the plan’s medical director; in other cases, it was a physician from the plan’s endocrinology department or a representative of the plan’s wellness or quality improvement department. We did not attempt to independently verify the responses to our questions. The 88 HMOs reported a wide range of diabetes management efforts, encompassing more than 30 different initiatives. Their efforts predominantly focused on educating patients about self-management and providers about recommended services. Many of the HMOs used similar strategies for improving care. Table II.2 reports the number of HMOs responding “yes” to each of the following survey questions:
Does your plan occasionally include information about diabetes in regular newsletters mailed to all enrollees?
Does your plan provide (diabetes-related) information to physicians through newsletters or mailings to physicians?
Does your plan have health professionals, such as diabetes educators, nutritionists, or diabetes nurses, available for enrollee education?
Does your plan have any policies or procedures that are used to guide physicians’ treatment of diabetic enrollees, such as guidelines, practice parameters, or information briefs?
Does your plan maintain a list or registry of your enrollees with type II diabetes?
Does your plan use case managers to monitor the medical care that your diabetic enrollees receive?
Has your plan set performance goals for diabetes care?
Does your plan mail educational newsletters or pamphlets about diabetes care to your diabetics?
Does your plan operate any type of program designed to consolidate services for diabetics?
Does your plan have a computer system that generates reminders for physicians when specific patients are due for specific services?
Can you estimate about what proportion of all your Medicare enrollees have type II diabetes?
In general, we did not find a strong association between the use of particular approaches to diabetes management and specific HMO characteristics, such as model type, tax status (for profit or not for profit), or size. (See tables II.3 and II.4.) However, for-profit HMOs reported slightly higher use of several diabetes management approaches than not-for-profit HMOs.
These included use of diabetes registries, mailings to enrollees with diabetes, and employment of diabetes-related health professionals, such as certified diabetes educators or nutritionists. Similarly, HMOs with the most experience as Medicare contractors—either in Medicare enrollment or in length of Medicare contract—were more likely to use certain diabetes management approaches, such as clinical practice guidelines, mailings to physicians and enrollees, and a diabetes registry.
Table II.3: Diabetes Interventions Reported by HMOs (Percent)
Table II.4: HMOs’ Efforts to Monitor Recommended Services by Plan Characteristic (Percent)
Rosamond Katz, Assistant Director, (202) 512-7148
Ellen M. Smith, Evaluator-in-Charge
Jennifer Grover, Evaluator
Stan Stenersen, Evaluator
Evan Stoll, Programmer Analyst
Pursuant to a congressional request, GAO reviewed how well the health care system provides preventive services to Medicare beneficiaries with diabetes, focusing on: (1) the extent to which Medicare beneficiaries with diabetes receive recommended levels of preventive and monitoring services; (2) what health maintenance organizations (HMO) that serve Medicare beneficiaries are doing to improve delivery of recommended diabetes services; and (3) what activities the Health Care Financing Administration (HCFA) supports to address these service needs for Medicare beneficiaries with diabetes. GAO noted that: (1) although experts agree that regular use of preventive and monitoring services can help minimize the complications of diabetes, most Medicare beneficiaries with diabetes do not receive these services at recommended intervals; (2) more than 90 percent of fee-for-service Medicare beneficiaries with diabetes visited their physicians at least twice in 1994; (3) however, only about 40 percent received an annual eye exam, and only about 20 percent received the recommended two specialized blood tests per year to monitor diabetes control; (4) on the whole, these fee-for-service utilization rates did not vary substantially by patient age, sex, or race; (5) the provision of preventive and monitoring services under managed care is also below recommended levels, although data for this service delivery approach are limited; (6) for example, among people with diabetes aged 18 to 64 who were enrolled in private HMO plans, less than half received an eye exam in 1995; (7) according to diabetes experts, several factors may contribute to low use of monitoring services, including physicians' lack of awareness of the latest recommendations and patients' lack of motivation to maintain adequate self-management care; (8) Medicare HMO efforts to improve diabetes care have been varied, but generally limited; (9) most plans report that they have focused on educating their enrollees with diabetes about self-management and their physicians about the need for preventive and monitoring services; (10) some HMOs have begun to take additional steps, such as tracking the degree to which physicians provide preventive care, and a few plans have developed comprehensive diabetes management programs; (11) because virtually all of these efforts have begun within the past 3 years, little is known about their effectiveness; (12) HCFA also has begun to test preventive care initiatives for diabetes and has targeted this area for special emphasis; (13) its efforts include helping to plan a nationwide diabetes education program, encouraging local experiments to increase use of monitoring services and improve quality of care for people with diabetes, and developing performance measures for providers of diabetes care; (14) but like the efforts of Medicare HMOs, HCFA's initiatives are quite recent, and the agency does not yet have results that would allow it to evaluate effectiveness; and (15) to the extent that these initiatives prove cost-effective, they may help promote better management of diabetes care.
As part of our audit of the fiscal years 2002 and 2001 CFS, we evaluated Treasury’s financial reporting procedures and related internal control. In our report, which is included in the fiscal year 2002 Financial Report of the United States Government, we reported material deficiencies relating to Treasury’s financial reporting procedures and internal control. These material deficiencies contributed to our disclaimer of opinion on the CFS and also constitute material weaknesses in internal control, which contributed to our adverse opinion on internal control. We performed sufficient audit work to provide the disclaimer of opinion and issued our audit report, dated March 20, 2003, in accordance with U.S. generally accepted government auditing standards. This report is based on the audit work we performed for the fiscal years 2002 and 2001 CFS. We requested comments on a draft of this report from the Secretary of the Treasury and the Director of OMB or their designees. Treasury’s and OMB’s comments are reprinted in appendix II, discussed in the Agency Comments and Our Evaluation section of this report, and incorporated in the report as applicable. Treasury’s current process for compiling the CFS does not directly link information from federal agencies’ audited financial statements to amounts reported in the CFS, and therefore cannot fully ensure that the information in the CFS is consistent with the underlying information in federal agencies’ audited financial statements and other financial data (see fig. 1). Treasury, as the preparer of the CFS, currently collects approximately 2,400 trial balances through the Federal Agencies’ Centralized Trial Balance System (FACTS I) from federal agencies and information from the Treasury Central Accounting and Reporting System (STAR) to compile the financial statements. The Federal Accounting Standards Advisory Board’s (FASAB) Statement of Federal Financial Accounting Concepts No. 4, Intended Audience and Qualitative Characteristics for the Consolidated Financial Report of the United States Government, states that the consolidated financial report should be a general purpose report that is aggregated from agency reports and that it should tell users where to find information in other formats, both aggregated and disaggregated, such as individual agency reports, agency websites, and the President’s Budget. Without directly linking financial information from agencies’ audited financial statements, the information in the CFS may not be reliable. The lack of direct linkage also affects the efficiency and effectiveness of the audit of the CFS. In addition, the reliability of certain information in Management’s Discussion and Analysis, Stewardship Information, and Supplemental Information may be affected. As Treasury is designing its new compilation process, which it expects to implement beginning with the fiscal year 2004 CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to design the new compilation process to directly link information from federal agencies’ audited financial statements to amounts reported in all the applicable CFS and related footnotes, and consider the other applicable recommendations in this report when designing and implementing the new compilation process. We identified specific areas of internal control in Treasury’s process for preparing the CFS that need to be strengthened.
Internal control should provide, among other things, reasonable assurance that financial reporting is reliable. GAO’s Standards for Internal Control in the Federal Government defines the minimum level of quality acceptable for internal control in the federal government and provides the standards against which internal control is to be evaluated. These standards state that internal controls should include, among other items, (1) segregation of duties, (2) appropriate documentation of transactions and internal control, and (3) reviews by management at the functional or activity level. We found many controls in place, but we identified three areas that need to be improved. Although Treasury is developing a new system and procedures for preparing the CFS, the need for adequate internal control remains important and needs to be considered during the development process. Segregation of duties is the practice of dividing the steps in a critical function among different individuals in order to reduce the risk of error or fraud, thus preventing a single individual from having full control of a transaction or event. FACTS I and the Financial Management Service’s Hyperion database are used to compile the CFS. We found that Treasury’s systems administrators responsible for processing the FACTS I data have the capability to enter, change, and delete data within FACTS I and the Hyperion database without any supervisory review. They are also able to post adjustments to the CFS without formal approval. Lack of proper segregation of duties for critical functions leaves the CFS vulnerable to errors and could result in incomplete and inaccurate summarization of data within these financial statements. While Treasury has documented some portions of its process for compiling the CFS, it has not fully documented its policies and procedures for preparing the CFS report. Agency management is responsible for developing detailed policies, procedures, and practices to fit agency operations and ensuring that internal control is built into and is an integral part of operations. Although GAO’s Standards for Internal Control in the Federal Government calls for clear documentation of policies and procedures, we found that Treasury has not fully implemented this key control activity. Without documented policies and procedures, staff could follow inconsistent standards and practices or not follow them at all. This potential for inconsistency increases the risk that errors in the compilation process could go undetected and could result in an incomplete and inaccurate summarization of data within the CFS, creating a financial report that is not an accurate representation of the financial position of the U.S. government. We found that Treasury management did not review transactions within several key compilation processes. Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. Appropriate reviews by management of key decisions and data are vital controls to ensure that only authorized actions occur. For example, Treasury’s FACTS I system allows for master appropriation files, the files that list all federal agencies by appropriation code, to be updated by review accountants without supervisory approval. Also, there is no requirement for supervisory review of changes made to agency data as a result of issues identified during the “agency data analysis process” performed by Treasury.
In some instances, supervisory reviews were required, but any reviews that may have been performed were not documented. For example, there was no documentation of supervisory review of changes to the Hyperion system software and chart of accounts used to compile the data for the CFS. Records of changes and reviews of the changes made to the templates used to create the CFS were also inadequate. Inadequate supervisory review and inadequate documentation of changes and reviews could allow data that go into the CFS to be manipulated or changed without any supervisory control or review, resulting in the possibility that agency data could be changed or incorrectly compiled in the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, in connection with Treasury’s current compilation process and the development of Treasury’s new compilation system and process, to segregate the duties of individuals who have the capability to enter, change, and delete data within FACTS I and the Hyperion database and post adjustments to the CFS; develop and fully document policies and procedures for the consolidated financial statement preparation process so that they are proper, complete, and consistently applied among staff members; and require and document reviews by management of all procedures that result in data changes to the CFS. The net position reported in the CFS is derived by subtracting liabilities from assets, rather than through balanced accounting entries. In other words, the CFS is “plugged” to make it balance. To make the fiscal year 2002 CFS balance, Treasury recorded a net $17.1 billion decrease to net operating cost on the Statement of Operations and Changes in Net Position, which it labeled “Unreconciled Transactions Affecting the Change in Net Position.” Treasury does not identify and quantify all components of this unreconciled activity. Treasury attributes these net unreconciled transaction amounts to (1) improper recording of intragovernmental transactions by federal agencies, (2) transactions affecting assets and liabilities not being identified properly by federal agencies as prior period adjustments, and (3) timing differences and errors in reporting transactions. Treasury stated in its November 2001 report on its CFS improvement project that in order to properly reconcile net position, federal agencies would need to split net position between intragovernmental and public components, including ending balances and the year’s activity. Currently, OMB requires federal agencies to identify intragovernmental assets and liabilities on their audited balance sheets but does not require the intragovernmental portion of net position to be identified. Without a process in place to identify and quantify all components of the activity in the net position line item, revenues, costs, assets, and liabilities may be misstated, thereby affecting the reliability of the CFS. 
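In simplified terms, and ignoring prior-period adjustments and other changes in net position for the sake of illustration (this is a stylized sketch of the arithmetic, not Treasury’s actual entries), the unreconciled amount is the residual needed to force the statements to articulate:

\[
\text{Net position}_{\text{end}} = \text{Assets}_{\text{end}} - \text{Liabilities}_{\text{end}},
\]
\[
\text{Unreconciled transactions} = \text{Net position}_{\text{end}} - \left(\text{Net position}_{\text{begin}} - \text{Net operating cost}\right).
\]

Because the second quantity is computed as a residual rather than built up from balanced accounting entries, the net $17.1 billion recorded for fiscal year 2002 cannot be traced to specific transactions.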
As Treasury is designing its new financial statement compilation process to begin with the fiscal year 2004 CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop reconciliation procedures which will aid in understanding and controlling the net position balance as well as eliminate the plugs previously associated with compiling the CFS; and use balanced accounting entries to account for the change in net position rather than simple subtraction of liabilities from assets. Federal agencies are unable to fully reconcile intragovernmental activity and balances. OMB and Treasury require CFO Act agencies to reconcile selected intragovernmental activity and balances with their “trading partners” and to report on the extent and results of intragovernmental activity and balances reconciliation efforts. The Inspectors General reviewed these reports and communicated the results of their reviews to OMB, Treasury, and us. A substantial number of the CFO Act agencies did not fully perform the required reconciliations for fiscal year 2002, citing reasons such as (1) failure of trading partners to provide needed data, (2) limitations and incompatibility of agency and trading partner systems, and (3) human resource issues. For fiscal year 2002, amounts reported for federal agency trading partners for certain intragovernmental accounts were significantly out of balance. A lack of standardization in transaction processing and a lack of sufficient communication between trading partners contribute significantly to federal agencies’ inability to fully reconcile intragovernmental activity and balances. Without improvement in this area, Treasury cannot properly eliminate intragovernmental activity and balances and, as a result, assets, liabilities, revenue, and costs reported in the CFS may not be fairly stated. Federal agencies are required to consistently and fully account for, reconcile, and report intragovernmental activity and balances across the federal government. To address certain issues that have contributed to the out-of-balance condition for intragovernmental activity and balances, OMB has established a set of standard business rules, OMB Memorandum M-03-01, Business Rules for Intragovernmental Transactions, for governmentwide transactions among trading partners; the memorandum requires quarterly reconciliations of intragovernmental activity and balances, beginning with fiscal year 2003. Treasury Financial Manual, section 4030, also requires reconciliation of intragovernmental activity and balances. Further, Treasury has begun a process to help federal agencies better perform their reconciliations, by providing each agency with detailed trading partner information. Also, Treasury is planning to require federal agencies, beginning with fiscal year 2004, to report in Treasury’s new closing package intragovernmental activity and balances by trading partner.
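As an illustration of the kind of trading-partner comparison that the quarterly reconciliation requirement contemplates, the following sketch (Python; the agency identifiers, account categories, and tolerance are hypothetical) pairs each reported intragovernmental receivable with the trading partner’s mirror payable and flags the differences:

from collections import defaultdict

# Hypothetical reported balances, in millions: each agency reports what it believes
# it is owed by (receivable) or owes to (payable) each trading partner.
reported = [
    {"agency": "AGY-A", "partner": "AGY-B", "type": "receivable", "amount": 120.0},
    {"agency": "AGY-B", "partner": "AGY-A", "type": "payable",    "amount":  95.0},
    {"agency": "AGY-C", "partner": "AGY-A", "type": "payable",    "amount":  40.0},
]

def reconcile(reported, tolerance=1.0):
    # Sum payables by (reporting agency, trading partner), then compare each
    # receivable against the partner's mirror payable.
    payables = defaultdict(float)
    for r in reported:
        if r["type"] == "payable":
            payables[(r["agency"], r["partner"])] += r["amount"]
    differences = []
    for r in reported:
        if r["type"] != "receivable":
            continue
        mirror = payables.get((r["partner"], r["agency"]), 0.0)
        diff = r["amount"] - mirror
        if abs(diff) > tolerance:
            differences.append((r["agency"], r["partner"], r["amount"], mirror, diff))
    return differences

for agency, partner, recv, pay, diff in reconcile(reported):
    print(f"{agency} reports {recv} receivable from {partner}; "
          f"{partner} reports {pay} payable; difference {diff}")

A fuller check would also flag payables with no corresponding receivable (such as AGY-C’s balance above) and would extend the same pairing to other reciprocal categories, such as intragovernmental revenue and expense.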
As OMB continues to make strides to address issues related to intragovernmental transactions, we recommend that the Director of the Office of Management and Budget direct the Controller of the Office of Federal Financial Management to develop policies and procedures that document how OMB will enforce the business rules provided in OMB Memorandum M-03-01, Business Rules for Intragovernmental Transactions, and require that significant differences noted between business partners be resolved and the resolution be documented. We also recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of the Office of Management and Budget, to implement the plan to require federal agencies to report in Treasury’s new closing package, beginning with fiscal year 2004, intragovernmental activity and balances by trading partner and indicate amounts that have not been reconciled with trading partners and amounts, if any, that are in dispute. During our audits, we found the following: Intragovernmental activity and balances are “dropped” or “offset” in the preparation of the CFS rather than eliminated through balanced accounting entries. Certain intragovernmental activity and balances, primarily related to appropriations, are not being properly considered in the consolidation process. No reconciliation is performed for the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year, such as differences due to purchases that are capitalized as inventory or equipment and revenue that is deferred. Consolidated financial statements are intended to present the results of operations and financial position of the components that make up the reporting entity as if the entity were a single enterprise. Therefore, when preparing the CFS, intragovernmental activity and balances between federal agencies must be eliminated. As mentioned above, federal agencies’ problems in handling their intragovernmental transactions impair Treasury’s ability to properly eliminate these transactions, and significant differences in intragovernmental accounts have been identified. Without an effective process, intragovernmental activity and balances are not fully accounted for and eliminated in the process used to prepare the CFS. As a result, the federal government’s ability to determine the impact of these differences on the amounts reported in the CFS is impaired and, consequently, the CFS may be misstated. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with OMB’s Controller of the Office of Federal Financial Management, to design procedures that will account for the difference in intragovernmental assets and liabilities throughout the compilation process by means of formal consolidating and elimination accounting entries; develop solutions for intragovernmental activity and balance issues relating to federal agencies’ accounting, reconciling, and reporting in areas other than those OMB now requires be reconciled, primarily areas relating to appropriations; and reconcile the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year. 
Examples of these differences would include capitalized purchases such as inventory or equipment and deferred revenue. Treasury did not have an adequate process to identify and report items needed to reconcile the U.S. government’s fiscal year 2002 net operating cost of $364.9 billion to the fiscal year 2002 unified budget deficit, which was reported as $157.7 billion. The Reconciliation of Net Operating Cost and Unified Budget Surplus (or Deficit) (hereafter referred to as the reconciliation statement) is expected to explain the differences between these two amounts, which arise because the CFS are prepared on the accrual basis in accordance with U.S. generally accepted accounting principles. Under accrual accounting, transactions are reported when the event or transaction is recognizable under U.S. generally accepted accounting principles rather than when cash is received or paid. By contrast, federal budgetary reporting is, with certain exceptions, on the cash basis, in accordance with accepted budget concepts and policies. Statement of Federal Financial Accounting Standards (SFFAS) No. 24, Selected Standards for the Consolidated Financial Report of the United States Government, effective in fiscal year 2002, requires the reconciliation statement as part of the CFS. In our audit of the reconciliation statement, we found that Treasury was unable to identify all the transactions needed to properly reconcile the statement. Treasury’s process for compiling the reconciliation statement involved the use of two independent sources of information—FACTS data from federal agencies’ general ledger systems for the net operating cost and most of the reconciliation statement items and Treasury’s central accounting and reporting system (STAR) primarily for the unified budget surplus/deficit amounts. The reconciliation statement begins with the net operating cost amount reported in the Statement of Operations and Changes in Net Position (derived through FACTS data). As noted above, this amount includes a net $17.1 billion labeled as “unreconciled transactions,” which was needed to balance the consolidated Balance Sheet. Because the net operating cost amount includes this plug, which does not correspond to any budget activity, the $17.1 billion should have been included as a reconciling item in the reconciliation statement, but it was not. In addition, a $1 billion “net amount of all other differences” (another plug) was also needed in the reconciliation statement to balance net operating cost to the unified budget deficit. Treasury was unable to adequately identify and explain the gross components of such amounts. Treasury’s process for preparing the reconciliation statement also did not ensure completeness of reporting or ascertain the consistency of all the amounts reported in the reconciliation statement with the related balance sheet line items, related notes, or federal agency financial statements. We performed an analysis to determine whether all applicable components reported in the other statements (and related note disclosures) included in the CFS were properly reflected in the reconciliation statement. We found about $21 billion of net changes in various line item account balances on the balance sheet that were not explained on either the reconciliation statement or the Statement of Changes in Cash Balance from Unified Budget and Other Activities.
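In stylized form (using the statement’s section labels loosely rather than its actual line items), the relationship the reconciliation statement is meant to capture is:

\[
\text{Unified budget deficit} \approx \text{Net operating cost}
- \text{(components of net operating cost not part of the budget deficit, e.g., depreciation)}
+ \text{(components of the budget deficit not part of net operating cost, e.g., capitalized asset purchases)}
+ \text{net unreconciled differences}.
\]

The example that follows shows how individual reconciling items of this kind are supposed to tie to changes on the balance sheet.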
For example, the reconciliation statement reported depreciation expense ($20.5 billion) and total capitalized fixed assets ($40.9 billion) as the components of the net change in property, plant, and equipment. Although these activities accounted for a net increase of $20.4 billion ($40.9 billion of capitalized assets less $20.5 billion of depreciation), the balance sheet reflected a smaller net increase of $18 billion; Treasury was unable to explain the remaining $2.4 billion of the change. In addition, although we found that the line item “principal repayments of precredit reform loans” reported on the reconciliation statement was derived from STAR, Treasury was unable to link this $8.2 billion amount to any related agency financial statements or to the consolidated Balance Sheet and related notes. Lastly, Treasury did not establish a reporting materiality threshold for purposes of collecting and reporting information in the reconciliation statement. For example, some items were reported simply as a net “increase/decrease” without considering how material, both quantitatively and qualitatively, the gross changes were. We noted, for instance, that the “components of the budget surplus (deficit) not part of net operating cost” section of the statement contains a reconciling item titled “increase in inventory,” rather than reporting “purchases of inventory” in that section and separately reporting the “sales, use, or disposal of inventory” among the “components of net operating cost not part of the budget surplus (or deficit).” Treasury was unable to demonstrate whether material, informative amounts were netted, and pertinent information may therefore not be disclosed. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile its net operating cost and unified budget surplus (or deficit). Treasury should report “net unreconciled differences” included in the net operating results line item as a separate reconciling activity in the reconciliation statement, develop policies and procedures to ensure completeness of reporting and document how all the applicable components reported in the other consolidated financial statements (and related note disclosures included in the CFS) were properly reflected in the reconciliation statement, and establish reporting materiality thresholds for determining which agency financial statement activities to collect and report at the governmentwide level to assist in ensuring that the reconciliation statement is useful and conveys meaningful information. In addition, if Treasury chooses to continue using information both from federal agencies’ financial statements and from the STAR system, we recommend that Treasury demonstrate how the amounts from STAR reconcile to federal agencies’ financial statements and identify and document the cause of any significant differences noted. Treasury was unable to demonstrate how significant amounts reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities were related to the underlying federal agencies’ financial statements. The Statement of Changes in Cash Balance from Unified Budget and Other Activities is expected to explain how the annual unified budget surplus or deficit relates to the change in the U.S. government’s operating cash. SFFAS No.
24, effective in fiscal year 2002, requires the Statement of Changes in Cash Balance from Unified Budget and Other Activities as part of the CFS. For fiscal year 2002, the Statement of Changes in Cash Balance from Unified Budget and Other Activities reported a unified budget deficit of $157.7 billion, derived as the difference between reported actual unified budget receipts of $1,853.3 billion and actual unified budget outlays of $2,011 billion. Both line items were material to this statement and were compiled from federal agencies’ monthly reports to Treasury in the STAR system. Treasury was unable to explain material differences, totaling $231 billion (absolute) and $166 billion (net), between the actual unified budget net outlays reported on this statement and the outlays reported on selected individual federal agencies’ audited Combined Statement of Budgetary Resources. For example, we found one federal agency that reported net outlays for fiscal year 2002 as $479 billion on its audited Combined Statement of Budgetary Resources, while Treasury’s records showed $375 billion for fiscal year 2002 for this agency. This agency had received an unqualified auditor opinion on its financial statements. OMB Bulletin 01-09, Form and Content of Agency Financial Statements, states that outlays in federal agencies’ Combined Statement of Budgetary Resources should agree with the net outlays reported in the budget of the U.S. government. In addition, SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires explanation of any material differences between the information required to be disclosed (including outlays) and the amounts described as “actual” in the budget of the U.S. government. Treasury believes its records for net outlays are reliable and accurate; however, many federal agencies are reporting different net outlays and receiving clean opinions on their financial statements. Treasury was unable to adequately explain the over $24 billion net difference between actual unified budget receipts of $1,853.3 billion and total operating revenue of $1,877.7 billion reported in the Statements of Operations and Changes in Net Position. While these amounts are not expected to equal (for example, operating revenues include accrued amounts, and budget receipts are reported on the cash basis), there is a relationship between operating revenues reported on the Statement of Operations and Changes in Net Position and unified budget receipts reported on the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Therefore, the expectation is that differences between these amounts should be explainable. Treasury was also not able to provide support for how the line items in the “other activities” section of this statement, totaling $13.5 billion, related to either the underlying Balance Sheet or related notes accompanying the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies’ audited financial statements. 
Treasury should document the consistency of the significant line items on this statement to agencies’ audited financial statements; request, through its closing package, that federal agencies provide the net outlays reported in their Combined Statement of Budgetary Resources and explanations for any significant differences between net outlay amounts reported in the Combined Statement of Budgetary Resources and the budget of the U.S. government; investigate the differences between net outlays reported in federal agencies’ Combined Statement of Budgetary Resources and Treasury’s records in the STAR system to ensure that the proper amounts are reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities; explain and document the differences between the operating revenue amount reported on the Statement of Operations and Changes in Net Position and unified budget receipts reported on the Statement of Changes in Cash Balance from Unified Budget and Other Activities; and provide support for how the line items in the “other activities” section of this statement relate to either the underlying Balance Sheet or related notes accompanying the CFS. The CFS includes certain financial information for the executive, legislative, and judicial branches, to the extent that federal agencies within those branches have provided Treasury such information. However, there are undetermined amounts of assets, liabilities, and revenues that are not included, and the government did not provide evidence or disclose in the CFS that such financial information was immaterial. Statement of Federal Financial Accounting Concepts (SFFAC) No. 2, Entity and Display, provides guidance on defining reporting entities. Under SFFAC No. 2, a reporting entity for general purpose financial statements would “meet all of the following criteria: (1) there is a management responsible for controlling and deploying resources, producing outputs and outcomes, executing the budget or a portion thereof . . ., and held accountable for the entity’s performance; (2) the entity’s scope is such that its financial statements would provide a meaningful representation of operations and financial condition; and (3) there are likely to be users of the financial statements who are interested in and could use the information in the statements to help them make resource allocation and other decisions and hold the entity accountable for its deployment and use of resources.” SFFAC No. 2 also calls for the notes to financial statements to provide disclosures that are necessary to make the financial statements more informative and not misleading, such as a brief description of the reporting entity. The statement also provides criteria for including components in a reporting entity. As examples of the application of such criteria, SFFAC No. 2 specifically discusses the Federal Reserve System and government-sponsored enterprises and the reasons for FASAB’s conclusion that these entities would not be considered components of the U.S. government reporting entity. In accordance with SFFAC No. 2, if the government could provide evidence that the financial information not included in the CFS is immaterial, then the CFS reporting entity could be described as the “U.S. government” and would conform materially to the criteria set forth in SFFAC No. 2. However, the fiscal year 2002 CFS reporting entity excluded certain entities without providing evidence or clearly explaining the reason. 
An appendix to the CFS listed 13 entities that were excluded from the CFS reporting entity and specifically explained the reason for excluding one of those entities—the Federal Reserve System. However, the appendix did not explain the reason for excluding the other 12 listed entities, such as government-sponsored enterprises and military exchanges. While exclusion of those entities may be appropriate, some users of the CFS may be confused if the reason for excluding entities is not clearly disclosed in the CFS. We understand the inherent challenges in getting complete information for all three branches of the U.S. government. However, not including required information for all components included in a reporting entity or not clearly explaining the reason for excluding certain entities could mislead some users of the financial statements. Without evidence of the amounts of information excluded and any related disclosures, in particular, evidence that what was excluded was immaterial to the CFS, we could not gain adequate assurance regarding the excluded amounts, and, under auditing standards, this issue could affect our ability to express an opinion on the CFS in future years. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to do the following: Perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by FASAB. Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included, but is not included, is immaterial. Provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity's assets, liabilities, and revenues. Disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. Treasury lacks an adequate process to ensure that the financial statements, related notes, stewardship, and supplemental information in the CFS are presented in conformity with U.S. generally accepted accounting principles. SFFAS No. 24 states that FASAB standards apply to all federal agencies, including the U.S. government as a whole, unless provision is made for different accounting treatment in a current or subsequent standard. Specifically, we found that Treasury did not (1) timely identify applicable generally accepted accounting principles requirements, (2) make timely modifications to agency data calls to obtain information needed, (3) assess, qualitatively and quantitatively, the materiality of omitted disclosures, or (4) document decisions reached with regard to omitted disclosures and the rationale for such decisions. We identified numerous disclosures that were not in conformity with applicable standards. These needed disclosures are described in appendix I. We did note that Treasury is requesting certain information in its planned closing package for fiscal year 2004 that may address some of the needed disclosures.
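The kind of process described above lends itself to a simple tracking discipline: each disclosure requirement is identified from the applicable standard, tied to the closing package data call that would satisfy it, assessed for materiality if it is omitted, and closed out with a documented rationale. What follows is only an illustrative sketch of such a tracking structure, written in Python; it is not a description of Treasury's actual systems, and the field names and example entry are hypothetical.

    # Illustrative sketch only; field names and the example entry are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DisclosureItem:
        standard: str              # e.g., "SFFAS No. 18, par. 10"
        requirement: str           # short description of the required disclosure
        in_data_call: bool         # has the closing package been modified to collect it?
        included_in_cfs: bool      # was the disclosure presented in the CFS?
        assessed_immaterial: bool  # documented qualitative/quantitative assessment
        rationale: str             # documented reason if the disclosure is omitted

    def open_exceptions(items):
        """Return items that are neither included in the CFS nor supported by a
        documented materiality assessment and rationale for exclusion."""
        return [i for i in items
                if not i.included_in_cfs
                and not (i.assessed_immaterial and i.rationale)]

    checklist = [
        DisclosureItem("SFFAS No. 18, par. 10",
                       "Reconciliation of subsidy cost allowance balances",
                       in_data_call=False, included_in_cfs=False,
                       assessed_immaterial=False, rationale=""),
    ]
    for item in open_exceptions(checklist):
        print(f"Unresolved: {item.standard} - {item.requirement}")

Under this approach, an item drops off the exception list only when it is either included in the CFS or supported by a documented assessment that its omission is immaterial, which parallels the recommendation that follows.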
We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to establish a formal process that will allow the financial statements, related notes, stewardship, and supplemental information in the CFS to be presented in conformity with U.S. generally accepted accounting principles. The process should timely identify generally accepted accounting principles requirements; make timely modifications to Treasury's closing package requirements to obtain the information needed; assess, qualitatively and quantitatively, the impact of omitted disclosures; and document decisions reached and the rationale for such decisions. With respect to the 16 required disclosures identified in appendix I that were not included in the CFS, we recommend that each of these disclosures be included in the CFS or the rationale for excluding any of them be documented. During our audit we found certain issues related to (1) management representation letters, (2) legal representation letters, and (3) information on major treaties and other international agreements that will require certain actions by Treasury and OMB. Other issues related to these same three areas will need to be addressed by federal agencies and their auditors to facilitate Treasury's and OMB's preparation of the CFS. We plan to separately communicate to agency Chief Financial Officers and Inspectors General the details of our concerns about such issues. We have summarized our findings below and are providing recommendations to help address the issues that require action by Treasury and OMB. For each agency financial statement audit, generally accepted auditing standards require that agency auditors obtain written representations from agency management as part of the audit. In turn, Treasury and OMB are to receive all the required management representation letters and the related summaries of unadjusted misstatements from the federal agencies. This is important because generally accepted auditing standards require Treasury and OMB to provide us, as their auditor, a management representation letter for the CFS, and their letter depends on the information within agencies' management representation letters. However, we found that Treasury and OMB did not have policies or procedures to adequately review and analyze federal agencies' management representation letters. In a management representation letter, management typically acknowledges its responsibility for its financial statements; its belief that the financial statements are presented in conformity with U.S. generally accepted accounting principles; the completeness of the financial information in the statements; matters of recognition, measurement, and disclosure; and subsequent events. Without performing an adequate review and analysis of federal agencies' management representation letters, Treasury and OMB management may not be fully informed of matters that may affect their representations made with respect to the audit of the CFS. As part of our audit of the CFS, we received and reviewed 30 federal agencies' management representation letters.
We found that (1) 2 letters had discrepancies between what the auditor found and what the agency represented in its management representation letter, (2) 8 letters were not signed by the appropriate level of management, (3) 25 letters did not disclose the materiality threshold used by management in determining items to be included in the letter, (4) 4 letters omitted certain representations that are ordinarily included, (5) 2 letters did not include a schedule of unadjusted misstatements or affirm in their representation letter that there were no uncorrected misstatements, and (6) 15 schedules of unadjusted misstatements did not provide complete information about the misstatements that were identified. Only 1 of the 30 letters we reviewed had none of the deficiencies noted above. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to establish written policies and procedures for preparing the governmentwide management representation letter to help ensure that it is properly prepared and contains sufficient representations. Specifically, these policies and procedures should require an analysis of the agency management representations to determine if discrepancies exist between what the agency auditor reported and the representations made by the agency, including the resolution of such discrepancies; a determination that the agency management representation letters have been signed by the highest-level agency officials who are responsible for and knowledgeable about the matters included in the agency management representation letters; an assessment of the materiality thresholds used by federal agencies in their respective management representation letters; an assessment of the impact, if any, of federal agencies' materiality thresholds on the management representations made at the governmentwide level; an evaluation and assessment of the omission of representations ordinarily included in agency management representation letters; and an analysis and aggregation of the agencies' summaries of unadjusted misstatements to determine the completeness of the summaries and to ascertain the materiality, both individually and in the aggregate, of such unadjusted misstatements to the CFS taken as a whole. For each agency financial statement audit, generally accepted auditing standards require that agency auditors obtain written legal representations as part of the audit. Legal representation letters, along with related management schedules, are essential to properly reporting legal contingency losses in federal agencies' financial statements. Inadequate information in the legal representation letters could weaken the accuracy and reliability of federal agency financial statements and the CFS. We reviewed 34 federal agencies' legal representation letters and related management schedules to assess their adequacy. We found that the adequacy of some legal letters was questionable. For example, we found that 2 letters did not express an opinion on the expected outcome of virtually all of the two agencies' cases, and that 5 agencies did not provide the related management schedules.
In some cases, the lack of adequate information may have resulted from legal counsel’s desire to protect the confidentiality of lawyer-client communications, the difficulty in predicting the outcome of potential and pending litigation with any assurance, and/or legal counsel’s desire to avoid the possibility of prejudicing the outcome of the litigation to the client’s detriment. While these are understandable reasons, without adequate legal contingency information, management of Treasury and OMB may not be fully informed of matters that may affect the legal representations made with respect to the audit of the CFS. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to help ensure that agencies provide adequate information in their legal representation letters regarding the expected outcome of the cases and related management schedules. The CFS note disclosures did not include any information on major treaties and other international agreements to which the federal government is a party. These treaties and other international agreements address various issues including, but not limited to, trade, commerce, security, and arms that may involve financial obligations or give rise to loss contingencies. Treaties and other international agreements may lead to commitments or contingencies and therefore should be included in the CFS, in accordance with OMB Bulletin No. 01-09 and SFFAS No. 5, Accounting for Liabilities of the Federal Government, as amended by SFFAS No. 12, Recognition of Contingent Liabilities Arising from Litigation. The degree of certainty as to whether there will be a cost now or in the future, along with the ability to quantify it in advance, determines the appropriate accounting treatment. Treaties and other international agreements were not included in the notes to the CFS because Treasury and the federal agencies had yet to perform the necessary work to determine the nature and magnitude of those in force as of September 30, 2002. The State Department publishes a document annually called Treaties in Force. The most recent edition of Treaties in Force, released in August 2002, lists treaties and other international agreements of the United States that were in force on January 1, 2002. However, according to State Department staff, this document is incomplete because federal agencies do not always provide complete information on treaties and international agreements when a request for data is made. Not having information on major treaties and other international agreements in the CFS resulted in incomplete disclosures of the possible exposure to loss or obligations of the U.S. government. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. 
government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department’s Treaties in Force); classify all such scheduled major treaties and other international agreements as commitments or contingencies; disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency; disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations; and take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. In written comments on a draft of this report, which are reprinted in appendix II, Treasury and OMB stated that our report identified many recommendations that will improve the usefulness and accuracy of the CFS and that they have already incorporated many of them into their new system and processes that are being developed for preparing the fiscal year 2004 CFS. However, Treasury and OMB disagreed with our recommendations related to unreconciled transactions affecting net position and the Statement of Changes in Cash Balance from Unified Budget and Other Activities. They also stated that they would consider the other recommendations in our report as they continue the design and implementation of the new process for preparing the CFS. On the first matter, Treasury and OMB disagreed with our proposed recommendation that federal agencies submit to Treasury an analysis of their net position that separates intragovernmental and public transactions. The purpose of this recommendation was to help Treasury understand and control the U.S. government’s net position, as well as to eliminate the plugs associated with compiling the CFS. In response to our draft report, Treasury and OMB stated that Treasury had decided not to require agencies to split net position between intragovernmental and public transactions as Treasury had originally planned and reported in its CFS Improvement Project Report because it was unable to develop a procedure that agencies could use to provide this split. In addition, Treasury and OMB stated that this split would not identify certain items known to affect the unreconciled net position transactions. However, because Treasury has not identified and quantified all the components of the unreconciled transactions, a procedure is still needed that will adequately reconcile net position and assist Treasury in identifying and eliminating the plugs needed to balance the CFS. Our proposed recommendation in the draft report that we provided for comment was one option for Treasury to resolve the uncertainties regarding the reliability of these data. We recognize there are other ways to gain these assurances. Therefore, we have modified our recommendation to recommend that Treasury develop reconciliation procedures to aid in understanding and controlling the net position balance. Regarding the second matter, Treasury and OMB stated that we had suggested that federal agency data be used to prepare receipts and outlays used in the Statement of Changes in Cash Balance from Unified Budget and Other Activities. 
They stated that they disagreed with this approach because it would be time-consuming and costly to gather such information. Treasury and OMB have stated that the Statement of Changes in Cash Balance from Unified Budget and Other Activities is prepared from information derived from Treasury's Central Accounting System rather than from agencies' financial statements. We were not calling for Treasury to use federal agencies' financial statements to prepare the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Instead, we recommended that Treasury collect certain information already reported in federal agencies' audited financial statements and develop procedures that ensure consistency of the significant line items on the Statement of Changes in Cash Balance from Unified Budget and Other Activities with the agency-reported information. As we stated in our report, Treasury has expressed the belief that the information it maintains in its system is materially reliable. However, federal agencies also believe their amounts are materially reliable, and their auditors have rendered unqualified audit opinions on their financial statements. We found unexplained material differences between Treasury's records and some agencies' financial statements. We provided a schedule of these differences to Treasury and requested explanations for the material differences. As discussed in our report, Treasury was unable to explain material differences, totaling $231 billion (absolute) and $166 billion (net), between the actual unified budget net outlays reported on this statement and the net outlays reported on selected individual federal agencies' audited Combined Statement of Budgetary Resources. As stated in our report, OMB Bulletin 01-09, Form and Content of Agency Financial Statements, states that outlays in federal agencies' Combined Statement of Budgetary Resources should agree with the net outlays reported in the budget of the U.S. government. In some cases, we found that net outlay amounts reported in federal agencies' audited financial statements differed from the amounts included in the CFS and budget of the U.S. government for these agencies. For example, Treasury did not provide us with an explanation of why its own audited Combined Statement of Budgetary Resources reported net outlays of $479 billion for fiscal year 2002, while the amount included in the CFS relating to net outlays for the Department of the Treasury was only $375 billion for fiscal year 2002. Ensuring that the significant line items on the Statement of Changes in Cash Balance from Unified Budget and Other Activities are consistent with agencies' audited financial statements is a reasonable and important expectation. As stated in our report, SFFAS No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting, requires agencies to provide an explanation for any material differences between the information required to be disclosed (including outlays) in their financial statements and the amounts described as "actual" in the budget of the U.S. government. Also, many of the amounts reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities are intended to be the same as the amounts reported in the budget of the U.S. government.
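To make concrete both the distinction between the absolute and net differences cited above and the kind of consistency check we envision, the following is a minimal illustrative sketch in Python. Except for the Treasury example already cited in this report, the agency names, amounts, and materiality threshold are assumptions used only for illustration.

    # Illustrative sketch; apart from the Treasury example cited in this report,
    # the figures and the threshold are assumed for illustration only.
    agency_net_outlays = {
        # agency: (per audited Combined Statement of Budgetary Resources,
        #          per Treasury's central accounting (STAR) records), in $ billions
        "Department of the Treasury": (479, 375),   # example cited in this report
        "Hypothetical Agency A": (120, 121),        # assumed figures
    }

    ASSUMED_THRESHOLD = 1.0  # illustrative materiality threshold, in $ billions

    differences = {name: sbr - star
                   for name, (sbr, star) in agency_net_outlays.items()}
    net_difference = sum(differences.values())
    absolute_difference = sum(abs(d) for d in differences.values())
    print(f"Net difference: {net_difference}; absolute difference: {absolute_difference}")

    # Agencies whose difference exceeds the threshold would be asked, through the
    # closing package, to explain it before the statement is finalized.
    flagged = [name for name, d in differences.items()
               if abs(d) > ASSUMED_THRESHOLD]
    print("Explanations requested from:", flagged)

A check of this kind does not require Treasury to compile the statement from agencies' financial statements; it simply compares the amounts Treasury already maintains in its central accounting system with the audited amounts agencies already report, and documents the explanation for any material difference.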
As such, we continue to believe that the process we proposed would be the most efficient manner for Treasury, as the preparer of the CFS, to obtain the necessary assurance on the significant amounts reported in the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Treasury and OMB also suggested that the recommendations in our report related to management representation letter and legal representation letter issues not be addressed to Treasury. Generally accepted auditing standards require Treasury and OMB to provide us, as their auditor, a management representation letter for the CFS, and their letter depends on the information within agencies' management representation letters. However, we found that Treasury and OMB did not have policies or procedures to adequately review and analyze federal agencies' management representation letters. As such, we continue to believe that both Treasury and OMB need to work together to address the recommendations we made in this area. In regard to legal representation letters, we identified problems with certain agencies' letters that could weaken the accuracy and reliability of federal agencies' financial statements and the CFS. OMB, in its role of providing guidance to agencies and their auditors regarding agencywide financial statements, and Treasury, in its role as preparer of the CFS, both play an important part in ensuring that legal representation letters provide adequate information to enable the proper reporting of legal contingency losses in federal financial statements. As such, we continue to believe that both Treasury and OMB need to work together to address the recommendations we made in this area as well. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations. You should submit your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this letter. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the Subcommittee on Financial Management, the Budget, and International Security, Senate Committee on Governmental Affairs; the House Committee on Government Reform; and the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform. In addition, we are sending copies to the Fiscal Assistant Secretary of the Treasury and the Controller of OMB's Office of Federal Financial Management. Copies will be made available to others upon request. This report is also available at no charge on GAO's Web site, at www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact Jeffrey C. Steinhoff, Managing Director, Financial Management and Assurance, at (202) 512-2600 or Gary T. Engel, Director, Financial Management and Assurance, at (202) 512-3406. This enclosure describes 16 disclosures that U.S. generally accepted accounting principles require either to be included in the CFS or, if excluded, to have the rationale for their exclusion documented.
However, they were neither included nor was their exclusion documented. The note disclosure for loans receivable and loan guarantee liabilities departed from the following disclosure requirements of Statements of Federal Financial Accounting Standards (SFFAS) No. 3, Accounting for Inventory and Related Property, and SFFAS No. 18, Amendments to Accounting Standards for Direct Loans and Loan Guarantees. SFFAS No. 3, paragraph 91, requires the reporting entity to disclose the following: valuation basis for foreclosed property; changes from the prior year's accounting methods, if any; restrictions on the use/disposal of property; balances by categories (i.e., pre-1992 and post-1991 foreclosed property); number of properties held and average holding period by type or category; and number of properties for which foreclosure proceedings are in process at the end of the period, for foreclosed assets acquired in full or partial settlement of a direct or guaranteed loan. SFFAS No. 18, paragraph 9, states that credit programs should reestimate the subsidy cost allowance for outstanding direct loans and the liability for outstanding loan guarantees. There are two kinds of reestimates: (a) interest rate reestimates and (b) technical/default reestimates. Entities should measure and disclose each program's reestimates in these two components separately. SFFAS No. 18, paragraph 10, requires the reporting entity to display in the notes to the financial statements a reconciliation between the beginning and ending balances of the subsidy cost allowance for outstanding direct loans and the liability for outstanding loan guarantees reported in the entity's balance sheet. SFFAS No. 18, paragraph 11, requires disclosure of the total amount of direct or guaranteed loans disbursed for the current reporting year and the preceding reporting year; the subsidy expense by components, recognized for the direct or guaranteed loans disbursed in those years; and the subsidy reestimates by components for those years. SFFAS No. 18, paragraph 11, also requires disclosure, at the program level, of the subsidy rates for the total subsidy cost and its components for the interest subsidy costs, default costs (net of recoveries), fees and other collections, and other costs estimated for direct loans and loan guarantees in the current year's budget for the current year's cohorts. SFFAS No. 18, paragraph 11, further requires the reporting entity to disclose, discuss, and explain events and changes in economic conditions, other risk factors, legislation, credit policies, and subsidy estimation methodologies and assumptions that have had a significant and measurable effect on subsidy rates, subsidy expense, and subsidy reestimates. The note disclosure for inventories and related property departed from the following disclosure requirements of SFFAS No. 3, Accounting for Inventory and Related Property. When inventory or operating materials and supplies are declared excess, obsolete, or unserviceable, SFFAS No. 3, paragraph 30, requires the difference between the carrying amount and the expected net realizable value to be recognized as a loss or gain and either separately reported or disclosed. Paragraphs 35 and 50 require the following disclosures about inventory and operating materials and supplies: general composition; changes from the prior year's accounting methods, if any; restrictions on the sale of inventory and the use of operating materials and supplies; and changes in the criteria for categorizing inventory and operating materials and supplies.
Paragraph 56 requires the following disclosures about stockpile material: basis for valuing stockpile material, including valuation method; changes from the prior year's accounting methods, if any; restrictions on the use of stockpile material; balances in each category of stockpile material (i.e., stockpile material held and held for sale); criteria for grouping stockpile material held for sale; and changes in criteria for categorizing stockpile material held for sale. Paragraph 55 requires the disclosure of any difference between the carrying amount (i.e., purchase price or cost) of stockpile material held for sale and the estimated selling price of such assets. Paragraph 66 requires the following disclosures about seized property: changes from the prior year's accounting methods, if any; and analysis of changes in seized property (including the dollar value and number of seized properties) on hand at the beginning of the year, seized during the year, disposed of during the year, and on hand at the end of the year, as well as known liens or other claims against the property. This information should be presented by type of seizure and method of disposition when material. Paragraph 78 requires the following disclosures about forfeited property: analysis of the changes in forfeited property by type and dollar amount that includes (1) number of forfeitures on hand at the beginning of the year, (2) additions, (3) disposals and method of disposition, and (4) end-of-year balances; restrictions on the use or disposition of the property; and if available, an estimate of the value of property to be distributed to other federal, state, and local agencies in future reporting periods. Paragraph 98 requires that if a contingent loss is not recognized because it is less than probable or it is not reasonably measurable, then disclosure of the contingency shall be made if it is at least reasonably possible that a loss may occur. Paragraph 109 requires the following disclosures for goods held under price support and stabilization programs: basis for valuing commodities, including valuation method and cost flow assumptions; changes from the prior year's accounting methods; restrictions on the use, disposal, or sale of commodities; and analysis of the change in dollar amount and volume of commodities, including those (1) on hand at the beginning of the year, (2) acquired during the year, (3) disposed of during the year by method of disposition, (4) on hand at the end of the year, (5) on hand at year-end and estimated to be donated or transferred during the coming period, and (6) received as a result of surrender of collateral related to nonrecourse loans outstanding. The analysis should also show the dollar value and volume of purchase agreement commitments. The note disclosure for property, plant, and equipment (PP&E) departed from the following disclosure requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment; SFFAS No. 10, Accounting for Internal Use Software; and SFFAS No. 16, Amendments to Accounting for Property, Plant, and Equipment: SFFAS No. 6, paragraph 45, states that the following disclosures should be included: the estimated useful lives for each major class; capitalization thresholds, including any changes in thresholds during the period; and restrictions on the use or convertibility of general PP&E. SFFAS No.
10, paragraph 35, requires the following disclosures for internal use software: the cost, associated amortization, and book value; the estimated useful life for each major class of software; and the method of amortization. SFFAS No. 16, paragraph 9, requires an appropriate PP&E note disclosure to explain that “physical quantity” information for the multiuse heritage assets is included in supplemental stewardship reporting for heritage assets. The note disclosure for federal employee and veteran benefits payable was not complete and properly reported because the liability for military pensions and the note disclosure related to the “change in actuarial accrued pension liability and components of related expenses” for the military retirement fund do not agree with information presented in the Department of Defense’s (DOD) financial statements. The note disclosure included in the CFS does not include a line for the valuation of plan amendments that occurred during the year. DOD correctly reported plan amendments separately in its financial statements; however, the mechanism was not available through FACTS submission for DOD to report plan amendments separately to the Department of the Treasury. The note disclosure for environmental and disposal liabilities departed from the requirements of SFFAS No. 6 in two instances. The note disclosure on environmental liabilities was not complete and properly reported primarily because DOD was unable to fully implement elements of U.S. generally accepted accounting principles and OMB guidance. Specifically, the disclosures should do the following: Estimate and recognize cleanup costs associated with general PP&E at the time the PP&E is placed in service. In addition, a liability should be recognized for the portion of the estimated total cleanup cost that is attributable to that portion of the physical capacity used or that portion of the estimated useful life that has passed since the general PP&E was placed in service. As Treasury indicated in its note disclosures, DOD was unable to fully implement these two elements of U.S. generally accepted accounting principles. However, the note disclosure did not explain how these limitations prevented DOD from properly estimating its environmental liability. Linking the environmental liability to weaknesses in the DOD property, plant, and equipment systems would have made the CFS more useful to the reader. Include material changes in total estimated cleanup costs due to changes in laws, technology, or plans. When preparing the CFS, Treasury should consider whether the reader would be interested in understanding why the liability changed and include the explanation in the note disclosure. Financial Accounting Standards Board, Statement of Financial Accounting Standards (SFAS) No. 
13, Accounting for Leases, paragraph 16, requires the following disclosures on capital leases: future minimum lease payments as of the date of the latest balance sheet presented, in the aggregate and for each of the 5 succeeding fiscal years, with separate deductions from the total for the amount representing executory costs, including any profit thereon, included in the minimum lease payments, and for the amount of the imputed interest necessary to reduce the net minimum lease payments to present value; a summary of assets under capital lease by major asset category and the related total accumulated amortization; and a general description of the lessee’s leasing arrangements, including but not limited to (1) the basis on which contingent rental payments are determined, (2) the existence and terms of renewal or purchase options and escalation clauses, and (3) restrictions imposed by lease agreements, such as those concerning dividends, additional debt, and further leasing. The note disclosure for other liabilities departed from the following disclosure requirements of SFFAS No. 5, Accounting for Liabilities of the Federal Government, with respect to life insurance liabilities: Paragraph 117 states that all federal reporting entities with whole life insurance programs should follow the standards as prescribed in the private sector standards when reporting the liability for future policy benefits. The applicable private sector standards are SFAS No. 60, Accounting and Reporting by Insurance Enterprises; SFAS No. 97, Accounting and Reporting by Insurance Enterprises for Certain Long- Duration Contracts and for Realized Gains and Losses from the Sale of Investments; and SFAS No. 120, Accounting and Reporting by Mutual Life Insurance Enterprises and by Insurance Enterprises for Certain Long-Duration Participating Contracts; and American Institute of Certified Public Accountants Statement of Position 95-1, Accounting for Certain Insurance Activities of Mutual Life Insurance Enterprises. SFFAS No. 5, paragraph 121, requires that all components of the liability for future policy benefits (i.e., the net-level premium reserve for death and endowment policies and the liability for terminal dividends) should be separately disclosed in a footnote with a description of each amount and an explanation of its projected use and any other potential uses (e.g., reducing premiums, determining and declaring dividends available, or reducing federal support in the form of appropriations related to administrative cost or subsidies). Certain disclosed information on major commitments and contingencies in the notes to the CFS was inconsistent with disclosed information in individual agencies’ financial statements. Examples of such inconsistencies are as follows: Treasury did not disclose $114 billion in the notes to the CFS for war risk insurance. DOT provided temporary war risk insurance to U.S. air carriers whose coverage was canceled following the terrorist attacks on September 11, 2001. DOT disclosed $114 billion of war risk insurance in its notes to the financial statements, but Treasury did not disclose similar information in the notes to the CFS. Also, this information was included by DOT in the Treasury FACTS database. The risk of loss involving this type of insurance is unknown, but another terrorist attack against the United States could result in major claims. Treasury improperly disclosed $4.5 billion in unadjudicated claims for Commerce in the notes to the CFS. 
In its financial statements, Commerce disclosed that the exact amount of these claims against the U.S. government is unknown and the range of loss, which may exceed $4.5 billion as of September 30, 2002, cannot be estimated. Because Commerce had disclosed that it could not estimate the loss from unadjudicated claims, which was proper, Treasury should not have disclosed an amount in the notes to the CFS. Disclosing information in the CFS that is inconsistent with information in an agency’s financial statements may confuse users of the CFS or lead them to reach a wrong conclusion. Treasury did not disclose sufficient information regarding the nature of certain major commitments and contingencies in the notes to the CFS. For example, Treasury did not clearly disclose in the notes to the CFS information regarding a possible capital investment requirement of TVA. The Environmental Protection Agency (EPA) had taken judicial and administrative actions against TVA that could require TVA to invest an estimated $3 billion to purchase equipment in order to comply with the Clean Air Act and conform to EPA’s pollution control requirements. TVA is challenging this action. Treasury disclosed this $3 billion in the notes as an “administrative order against TVA” without providing the additional detail that the order represents a capital investment for compliance with the Clean Air Act and pollution control. The lack of such a detailed discussion about what the contingency represents could be misleading to readers of the CFS. The disclosure for collections and refunds of federal revenue departed from the following disclosure requirements of FASAB’s SFFAS No. 7, Concepts for Reconciling Budgetary and Financial Accounting: Paragraph 64, among other things, requires collecting entities to disclose the basis of accounting when the application of the general rule results in a modified cash basis of accounting. The CFS incorrectly states that the nonexchange revenues are reported on a modified cash basis of accounting when actually they are reported on a cash basis. Paragraph 69.2 requires collecting entities to provide in the other accompanying information any relevant estimates of the annual tax gap that become available as a result of federal government surveys or studies. The tax gap is defined as taxes or duties due from noncompliant taxpayers or importers. Amounts reported should be specifically defined (e.g., whether the tax gap includes or excludes estimates of taxes due on illegally earned revenue). Appropriate explanations of the limited reliability of the estimates also should be provided. Cross-references should be made to portions of the tax gap due from identified noncompliance assessments and preassessment work in process. The note disclosure for dedicated collections departed from the disclosure requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, by not including the following: condensed information about assets and liabilities showing investments in Treasury securities, other assets, liabilities due and payable to beneficiaries, other liabilities, and fund balance; condensed information on net cost and changes to fund balance, showing revenues by type (exchange/nonexchange), program expenses, other expenses, other financing sources, and other changes in fund balance; and any revenues, other financing sources, or costs attributable to the fund under accounting standards but not legally allowable as credits or charges to the fund. 
The note disclosure for Indian trust funds departed from the following disclosure requirements of SFFAS No. 7, Part I, Accounting for Revenue and Other Financing Sources, paragraph 85, by not including the following: a description of each fund’s purpose, how the administrative entity accounts for and reports the fund, and its authority to use those collections; the sources of revenue or other financing for the period and an explanation of the extent to which they are inflows of resources to the government or the result of intragovernmental flows; condensed information about assets and liabilities showing investments in Treasury securities, other assets, liabilities due and payable to beneficiaries, and other liabilities; condensed information on net cost and changes to fund balance, showing revenues by type (exchange/nonexchange), program expenses, other expenses, other financing sources, and other changes in fund balance; and any revenues, other financing sources, or costs attributable to the fund under accounting standards, but not legally allowable as credits or charges to the fund. The disclosure for social insurance departed from the following requirements of SFFAS No. 17, Accounting for Social Insurance: Paragraph 31 requires the program descriptions for Hospital Insurance and Supplementary Medical Insurance and an explanation of trends revealed in Chart 11: Estimated Railroad Retirement Income (Excluding Interest) and Expenditures 2002-2076. Paragraph 24 requires a description of statutory or other material changes, and the implications thereof, affecting the Medicare and Unemployment Insurance programs after the current fiscal year. Paragraph 25 requires the significant assumptions used in making estimates and projections regarding the Black Lung and Unemployment Insurance programs. Paragraph 32(1)(b) requires the total cash inflow from all sources, less net interest on intragovernmental borrowing and lending and the total cash outflow to be shown in nominal dollars for the Hospital Insurance program. Paragraph 32(1)(a) requires the narrative to accompany the cash flow data for Unemployment Insurance. This narrative should include the identification of any year or years during the projection period when cash outflow exceeds cash inflow, without interest, on intragovernmental borrowing or lending. In addition, the presentation should include an explanation of material crossover points, if any, where cash outflow exceeds cash inflow and the possible reasons for this. Paragraphs 27(3)(h) and 27(3)(j) require the estimates of the fund balances at the respective valuation dates of the social insurance programs (except Unemployment Insurance) to be included for each of the 4 preceding years. Only 1 year is shown. Paragraph 32(4) requires individual program sensitivity analyses for projection period cash flow in present value dollars and annual cash flow in nominal dollars. The CFS includes only present value sensitivity analyses for Social Security and Hospital Insurance. Paragraph 32(4) states that, at a minimum, the summary should present Social Security, Hospital Insurance, and Supplementary Medical Insurance separately. Paragraph 27(4)(a) requires the individual program sensitivity analyses for Social Security and Hospital Insurance to include an analysis of assumptions regarding net immigration. Paragraph 27(4)(a) requires the individual program sensitivity analysis for Hospital Insurance to include an analysis of death rates. 
The actuarial present value information for the Railroad Retirement Board should not include financial interchange income (intragovernmental income from Social Security). The information included in stewardship information for nonfederal physical property departed from the following disclosure requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 87: The annual investment, including a description of federally owned physical property transferred to state and local governments, must be disclosed. This information should be provided for the year ended on the balance sheet date as well as for each of the 4 preceding years. If data for additional years would provide a better indication of investment, reporting of the additional years' data is encouraged. Reporting should be at a meaningful category or level. A description of major programs involving federal investments in nonfederal physical property, including a description of programs or policies under which noncash assets are transferred to state and local governments, is to be provided. The information in stewardship information for human capital departed from the disclosure requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, by not including the following: a narrative description and the full cost of the investment in human capital for the year being reported on as well as the preceding 4 years (if full cost data are not available, outlay data can be reported); the full cost or outlay data for investments in human capital at a meaningful category or level (e.g., by major program, agency, or department); and a narrative description of major education and training programs considered federal investments in human capital. The information in stewardship information for research and development departed from the disclosure requirements of SFFAS No. 8, Supplementary Stewardship Reporting, paragraph 94, by not including the following: The annual investment made in the year ended on the balance sheet date as well as in each of the 4 years preceding that year must be reported. If data for additional years would provide a better indication of investment, reporting of the additional years' data is encouraged. In those unusual instances when entities have no historical data, only current reporting year data need be reported. Reporting must be at a meaningful category or level—for example, a major program or department. A narrative description of major research and development programs is to be included. The required supplemental information for deferred maintenance departed from the following disclosure requirements of SFFAS No. 6, Accounting for Property, Plant, and Equipment, paragraphs 83 and 84: Method of measuring deferred maintenance for each major class of PP&E should be included. If the condition assessment survey method of measuring deferred maintenance is used, the following should be presented for each major class of PP&E: (1) description of requirements or standards for acceptable operating condition, (2) any changes in the condition requirements or standards, and (3) asset condition and a range estimate of the dollar amount of maintenance needed to return the asset to its acceptable operating condition.
If the total life-cycle cost method is used, the following should be presented for each major class of PP&E: (1) the original date of the maintenance forecast and an explanation for any changes to the forecast, (2) prior year balance of the cumulative deferred maintenance amount, (3) the dollar amount of maintenance that was defined by the professionals who designed, built, or managed the PP&E as required maintenance for the reporting period, (4) the dollar amount of maintenance actually performed during the period, (5) the difference between the forecast and actual maintenance, (6) any adjustments to the scheduled amounts deemed necessary by the managers of the PP&E, and (7) the ending cumulative balance for the reporting period for each major class of asset experiencing deferred maintenance. If management elects to disclose critical and noncritical amounts, the disclosure is to include management's definition of these categories. The note disclosure for stewardship responsibilities departed from the disclosure requirements of SFFAS No. 5, paragraph 106, related to the risk assumed for federal insurance and guarantee programs. Risk assumed information is important for all federal insurance and guarantee programs (except social insurance, life insurance, and loan guarantee programs) and is generally measured by the present value of unpaid expected losses net of associated premiums, based on the risk inherent in the insurance or guarantee coverage in force. Paragraph 106 states that when financial information pursuant to FASB's standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the "risk assumed" approach. Treasury and OMB did not schedule a meeting or provide us with any technical comments on this report.
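As an illustration of the "risk assumed" measure described in this enclosure, that is, the present value of unpaid expected losses net of associated premiums, the following sketch in Python discounts assumed cash flows; the amounts and the discount rate are hypothetical and are not drawn from any federal insurance or guarantee program.

    # Illustrative sketch of the "risk assumed" measure; all figures and the
    # discount rate are hypothetical assumptions.
    def present_value(cash_flows, rate):
        """Discount a list of year-end cash flows at a constant annual rate."""
        return sum(cf / (1 + rate) ** year
                   for year, cf in enumerate(cash_flows, start=1))

    expected_losses = [40.0, 35.0, 25.0]    # assumed unpaid expected losses by year, $ millions
    expected_premiums = [20.0, 20.0, 20.0]  # assumed associated premiums by year, $ millions
    rate = 0.05                             # assumed discount rate

    risk_assumed = (present_value(expected_losses, rate)
                    - present_value(expected_premiums, rate))
    print(f"Risk assumed (present value, $ millions): {risk_assumed:.1f}")

Under SFFAS No. 5, paragraph 106, it is this amount, and the periodic change in it, that would be reported as required supplementary information for the covered insurance and guarantee programs.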
For the past 6 years, since GAO began auditing the consolidated financial statements of the U.S. government (CFS), GAO has been unable to express an opinion on them because of material weaknesses in internal control and financial reporting. Contributing to GAO's inability to express an opinion has been the federal government's lack of adequate systems, controls, and procedures to properly prepare its consolidated financial statements. The purpose of this report is to discuss in greater detail weaknesses in financial reporting procedures and internal control over the process for preparing the CFS that GAO identified and to recommend improvements to address those weaknesses. GAO found deficiencies in the compilation and reporting process in the following areas: (1) controls over the compilation process, (2) unreconciled transactions affecting the change in net position, (3) reconciliation of intragovernmental activity and balances, (4) elimination of intragovernmental activity and balances, (5) reconciliation of net operating costs and unified budget surplus (or deficit), (6) statements of changes in cash balance from unified budget and other activities, (7) defining and documenting of the reporting entity, and (8) conformity with U.S. generally accepted accounting principles. Another key deficiency in the compilation and reporting process for the CFS was the failure of the Department of the Treasury's process for compiling the CFS to directly link information from federal agencies' audited financial statements to amounts reported in the CFS. Without this direct link, the information in the CFS may not be reliable. The lack of a direct link also affects the efficiency and effectiveness of the CFS audit. Treasury is designing a new compilation process that it expects to directly link this information beginning with the fiscal year 2004 CFS. GAO identified three additional areas related to the compilation and reporting process for the CFS that warrant the attention of Treasury and the Office of Management and Budget (OMB): (1) management representation letters, (2) legal representation letters, and (3) information on treaties and other international agreements.
As the principal component of the NAS, FAA’s ATC system must operate continuously—24 hours a day, 365 days a year. Under federal law, FAA has primary responsibility for operating a common ATC system—a vast network of radars; automated data processing, navigation, and communications equipment; and traffic control facilities. FAA meets this responsibility by providing such services as controlling takeoffs and landings and managing the flow of air traffic between airports. Users of FAA’s services include the military, other government users, private pilots, and commercial aircraft operators. Projects in FAA’s modernization program are primarily organized around seven functional areas—automation, communications, facilities, navigation and landing, surveillance, weather, and mission support. Over the past 16 years, FAA’s modernization projects have experienced substantial cost overruns, lengthy schedule delays, and significant performance shortfalls. To illustrate, the centerpiece of that modernization program—the Advanced Automation System (AAS)—was restructured in 1994 after estimated costs to develop the system tripled from $2.5 billion to $7.6 billion and delays in putting significantly less-than-promised system capabilities into operation were expected to run 8 years or more over original estimates. The Congress has appropriated over $25 billion for ATC modernization between fiscal years 1982 and 1998. FAA estimates that it plans to spend an additional $11 billion through fiscal year 2003 on projects in the modernization program. Of the over $25 billion appropriated to date, FAA has reported spending about $5.3 billion on 81 completed projects and $15.7 billion on about 130 ongoing projects. Of the remaining funds, FAA has reported spending about $2.8 billion on projects that have been cancelled or restructured and $1.6 billion for personnel-related expenses associated with systems acquisition. (See app. I for a list of completed projects.) FAA has fielded some equipment, most recently a new voice communications system. However, delays in other projects have caused the agency to implement costly interim projects. Furthermore, the agency is still having difficulties in acquiring new systems within agreed-to schedule and cost parameters. FAA has been fielding new ATC systems. For example, in February 1997, FAA commissioned the last of 21 Voice Switching and Control System (VSCS) units. As one of the original projects in the 1983 modernization plan, the VSCS project encountered many difficulties during its early years. Since the project was restructured in 1992, FAA has been successful in completing the first phase of the project—installing the equipment into existing en route controller workstations. The second phase is now underway—making VSCS interface with the new display replacement equipment that is being installed in the en route centers. During the past year, FAA has commissioned 183 additional systems or units of systems. For example, FAA commissioned an additional 97 units for its Automated Surface Observing System, which brings the total of commissioned units to 230 out of 597 that are planned. (See app. II for details on the implementation status of 17 major ongoing modernization projects and app. III for data on changes in their cost and schedules.) Problems with modernization projects have caused delays in replacing FAA’s aging equipment, especially the automation equipment in the en route and terminal facilities. 
We found that FAA has added four interim projects (three for the TRACONs and one for the en route centers), reported to cost about $655 million, to sustain and enhance current automated air traffic control equipment. FAA began its first program for the TRACONs in 1987 and expects to complete its third program in 2000. In general, these programs provide new displays and software and upgrade hardware and data-processing equipment to allow TRACONs to handle increased traffic. One program for the en route centers—the Display Complex Channel Rehost—was completed in 1997. Under this program, FAA transferred existing software from obsolete display channel computers to new, more reliable and maintainable computers at five en route centers. The cost for interim projects could go even higher if FAA decides to implement an interim solution to overcome hardware problems and resolve year 2000 date requirements with the Host computer system. FAA is assessing the Host computer's microcode—low-level machine instructions used to service the main computer—with a plan to resolve any identified year 2000 date issues, while at the same time preparing to purchase and implement new hardware—Interim Host—for each of its 20 en route centers before January 1, 2000. FAA expects to incur costs of about $160 million during fiscal years 1998 and 1999 for the Interim Host. Two key components of the modernization effort—the Wide Area Augmentation System (WAAS) and the Standard Terminal Automation Replacement System (STARS)—have encountered delays and cost increases. In September 1997, FAA estimated total life cycle costs for WAAS at $2.4 billion ($900 million for facilities and equipment and $1.5 billion for operations). In January 1998, the estimate had increased by $600 million to $3 billion ($1 billion for facilities and equipment and $2 billion for operations). The increased costs for facilities and equipment are attributable to FAA's including previously overlooked costs for periodically updating WAAS' equipment. The revised cost estimate for operations and maintenance is largely attributable to higher than expected costs to lease geostationary satellites. In developing WAAS, FAA has also encountered delays. When signing the original development contract with Wilcox Electric in August 1995, FAA planned for the initial system to be operational by December 1997. Because of concerns about the contractor's performance, however, FAA terminated the original contract and signed a development contract with Raytheon (formerly Hughes Aircraft) in October 1996 that called for the initial system to be operational by April 1999. The 16-month schedule slippage was caused by problems with the original contractor's performance, design changes, and increased software development. Last year, we reported that the implementation of STARS—particularly at the three facilities targeted for operating the system before fiscal year 2000—would likely be delayed if FAA and its contractor experienced any difficulties in developing the software. These difficulties have materialized. In January 1998, FAA reported that more delays are likely because software requirements could increase to resolve air traffic controllers' dissatisfaction with the system's computer-human interface. FAA also reported an unexpected cost increase of $35 million for STARS during fiscal year 1998.
It attributed the increase to such factors as adding resources to maintain the program’s schedule and the effects of any design changes to address new computer-human interface concerns. Also, the estimated size of software development—measured in source lines of code—is now 50 percent larger than the original November 1996 estimate. FAA has requested a reprogramming of fiscal year 1998 funds to address this cost increase. Our reviews have identified some of the root causes of long-standing problems with FAA’s modernization and have recommended solutions to them. Among the causes of these problems were the lack of a complete and enforced systems architecture, unreliable cost information, lack of mature software acquisition processes, and an organizational culture that did not always act in the agency’s long-term best interest. While FAA has begun to implement many of our recommendations, it will need to stay focused on continued improvement. FAA has proceeded to modernize its many ATC systems without the benefits of a complete systems architecture, or overall blueprint, to guide their development and evolution. In February 1997, we reported that FAA has been doing a good job of defining one piece of its architecture—the logical architecture. That architecture describes FAA’s concept of operations, business functions, high-level descriptions of information systems and their interrelationships, and information flows among systems. This high-level architecture will guide the modernization of FAA’s ATC systems over the next 20 years. We identified shortcomings in two main areas. FAA’s system modernization lacked a technical architecture and an effective enforcement mechanism. FAA generally agreed with the recommendation in our February 1997 report to develop a technical architecture and has begun the task. We will continue to monitor FAA’s efforts. Also, to be effective, the architecture must be enforced consistently. FAA has no organizational entity responsible for enforcing architectural consistency. Until FAA defines and enforces a complete ATC systems architecture, the agency cannot ensure compatibility among its existing and future programs. We also recommended in the February 1997 report that FAA develop a management structure for enforcing the architecture that is similar to the provisions of the Clinger-Cohen Act of 1996 for department-level Chief Information Officers (CIO). FAA disagrees with this recommendation because it believes that the current location of its CIO, within the research and acquisition line of business, is effective. We continue to believe that such a structure is necessary. FAA’s CIO does not report directly to the Administrator and does not have organizational or budgetary authority over those who develop ATC systems or the units that operate and maintain them. Furthermore, the agency’s long history of problems in managing information technology projects reflects weaknesses in its current structure. In January 1997, we reported that FAA lacks reliable cost-estimating processes and cost-accounting practices needed to effectively manage investments in information technology, which leaves it at risk of making ill-informed decisions on critical multimillion, even billion, dollar air traffic control systems. Without reliable cost information, the likelihood of poor investment decisions is increased, not only when a project is initiated, but also throughout its life cycle. 
We recommended that FAA improve its cost-estimating processes and fully implement a cost-accounting system. Our recent review of the reliability of FAA’s reported financial information and the possible program and budgetary effects of reported financial statement deficiencies again highlights the need for reliable cost information. The audit of FAA’s 1996 financial statement disclosed many problems in reporting of operating materials and supplies and property and equipment. Many of these problems resulted from the lack of a reliable system for accumulating project cost accounting information. Although FAA has begun to institutionalize defined cost-estimating processes and to acquire a cost-accounting system, it will be a while before FAA and other decisionmakers have accurate information to determine and control costs. In March 1997, we reported that FAA’s processes for acquiring software—the most costly and complex component of ATC systems—are ad hoc, sometimes chaotic, and not repeatable across projects. As a result, FAA is at great risk of acquiring software that does not perform as intended and is not delivered on time and within budget. Furthermore, FAA lacks an effective approach for improving its processes for acquiring software. In the March 1997 report, we recommended that FAA improve its software acquisition capabilities by institutionalizing mature acquisition processes and reiterated our prior recommendation that FAA establish a management structure similar to the department-level CIOs to instill process discipline. FAA concurred with part of our recommendation and has initiated efforts to improve its software acquisition processes. These efforts, however, are not comprehensive, are not complete, and have not yet been implemented agencywide. Furthermore, FAA disagrees with our recommendation related to its management structure. Without establishing strong software acquisition processes and an effective management structure, FAA risks making the same mistakes it did on failed systems acquisition projects. In August 1996, we reported that an underlying cause of FAA’s ATC acquisition problems is its organizational culture—the beliefs, the values, and the attitudes and expectations shared by an organization’s members that affect their behavior and the behavior of the organization as a whole. We found that FAA’s acquisitions were impaired when employees acted in ways that did not reflect a strong commitment to mission focus, accountability, coordination, and adaptability. We recommended that FAA develop a comprehensive strategy for cultural change that (1) addresses specific responsibilities and performance measures for all stakeholders throughout FAA and (2) provides the incentives needed to promote the desired behaviors and achieve agencywide cultural change. In response to our recommendations, FAA issued a report outlining its overall strategy for changing its acquisition culture and describing its ongoing actions to influence organizational culture and improve its life cycle acquisition management processes. For example, the Acquisition and Research (ARA) organization has proposed restructuring its personnel system to tie pay to performance based on 15 measurable goals, each with its own performance plan. ARA’s proposed personnel system is under consideration by the Administrator. In our August 1996 report, we also noted that the Integrated Product Development System, based on integrated teams, was a major FAA initiative to address the shortcomings with its organizational culture.
According to an ARA program official, FAA has 15 integrated product teams, the majority of which have approved plans. The official indicated that all team members have received training to prepare them for their roles and that ARA is developing a set of standards to measure the performance of the integrated teams. However, the official also acknowledged that FAA has had difficulty in gaining commitment to the integrated team concept throughout the agency because offices outside of ARA have been resistant to integrated teams. To help overcome institutional cultural barriers, FAA and external stakeholders have been discussing the establishment of a special program office responsible for the acquisition of free flight systems. Although the details of how such an office would operate have not been put forward, one option would be for this office to have its own budget and the authority to make certifications and regulations and to determine system requirements. Such an office could be viewed as the evolutionary successor to the integrated product team system. Another approach being considered by FAA is the establishment of a single NAS manager at the level of associate administrator to eliminate traditional “stovepipes” between the acquisition and air traffic organizations. As FAA considers recommendations to create a new structure, we believe that it would be advantageous for FAA to implement our recommendation to create a management structure similar to the department-level CIO as called for in the Clinger-Cohen Act. Having an effective CIO with the organizational and budgetary authority to implement and enforce a complete, agencywide systems architecture would go a long way towards eliminating traditional “stovepipes” between integrated product teams, as well as between the acquisition and air traffic organizations. Furthermore, the agency could gain valuable insight from the experiences of other organizations that have implemented similar structures. Regardless of future direction, FAA recognizes that considerable work is needed to modify behaviors and create comprehensive cultural change. A continued focus on cultural change initiatives will be critical in the years ahead. While FAA is involving external and internal stakeholders in revising its approach to the modernization program, it will need to stay focused on implementing solutions to the root causes of past problems, ensure that all aspects of its acquisition management system are effectively implemented, and quickly address the looming crisis with the year 2000 date requirements. The FAA Administrator has begun an outreach effort with the aviation community to build consensus on and seek commitment to the future direction of the agency’s modernization program. Consistent with our findings on the logical architecture, a review of this program by the NAS Modernization Task Force concluded that the architecture under development builds on the concept of operations for the NAS and identifies the programs needed to meet the needs of the user community. However, the task force found that the architecture is not realistic because of (1) an insufficient budget; (2) the preponderance of risks associated primarily with certifying and deploying new equipment and with users’ cost to acquire equipment; and (3) unresolved institutional issues and a lack of user commitment. The task force recommended a revised approach that would be less costly and would be focused more on providing near-term user benefits.
Under this revised approach, FAA would (1) implement a set of core technologies to provide immediate user benefits; (2) modify the Flight 2000 initiative to address critical risk areas associated with key communications, navigation, and surveillance programs; and (3) proceed with implementing critical time-driven activities related to the Host computer and the year 2000 problems and with implementing such systems as STARS, surveillance radars, and en route displays to replace aging infrastructure. The details on how FAA intends to implement the task force’s recommendations are not yet known. However, from our discussions with task force officials, their practical effect would be that the development and the deployment of some current programs would be accelerated while others would be slowed down. Meanwhile, FAA would continue developing programs like STARS and the Display System Replacement and work to ensure that its computers recognize the year 2000. For example, under the revised approach, the WAAS program would be slowed down after Phase I, which is scheduled to provide initial satellite navigation capabilities by 1999, to enable FAA to resolve technical issues and explore how costs could be reduced. Further development would be subject to review and risk mitigation under the expanded Flight 2000 initiative. FAA faces both opportunities and challenges as it revises the modernization program. On the one hand, FAA has an opportunity to regain user confidence by delivering systems that benefit them. On the other hand, FAA is challenged to follow through with its investment management process improvements. We urge FAA to proceed cautiously as it attempts to expedite the deployment of key technologies to avoid repeating past practices, such as undue concern for schedules at the expense of disciplined systems development and careful, thorough testing. FAA will need to resist this temptation, as the results are typically systems that cost more than expected, are of low quality, and are late as well. Concerned that burdensome procurement rules were a primary contributor to FAA’s acquisition problems, the Congress exempted FAA from many procurement rules. In response, the agency implemented its Acquisition Management System (AMS) on April 1, 1996, to improve its acquisition of new technology. AMS is intended to provide high-level acquisition policy and guidance and to establish rigorous investment management practices. We are currently reviewing FAA’s investment management approach, including its practices and processes for selecting, controlling, and evaluating projects, and expect to report later this year. As FAA continues to implement AMS and embarks on a revised modernization approach, it will need to establish baselines for individual projects and performance measurements to track key goals. Under AMS, an acquisition project should have a baseline, which establishes the performance, life-cycle cost, schedule, and benefit boundaries within which the program is authorized to operate. Having an effective investment analysis capability is important in developing these baselines. In its May 1997 report on AMS, FAA noted that it has focused more attention on investment management analyses. The agency reported that it has established several investment analysis teams of individuals with expertise in such areas as cost estimating, market analysis, and risk assessment to help prepare program baselines to use in determining the best way to satisfy mission needs. 
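As a rough illustration of the baselining idea discussed above, the sketch below shows one way a project baseline record and a simple boundary check might be represented. The field names, threshold logic, and numbers are assumptions made for this example; they are not FAA's AMS data model or actual program data.

```python
from dataclasses import dataclass

@dataclass
class ProjectBaseline:
    """Illustrative AMS-style baseline record; field names are assumptions, not FAA's schema."""
    project: str
    life_cycle_cost_millions: float   # approved cost boundary
    last_site_year: int               # approved schedule boundary
    key_performance_goal: str         # performance boundary (descriptive only here)
    annual_benefit_millions: float    # expected benefit boundary (descriptive only here)

def breaches_baseline(baseline: ProjectBaseline,
                      est_cost_millions: float,
                      est_last_site_year: int) -> bool:
    """Flag a project whose current estimates exceed its cost or schedule boundaries."""
    return (est_cost_millions > baseline.life_cycle_cost_millions or
            est_last_site_year > baseline.last_site_year)

# Hypothetical numbers only, for illustration.
example = ProjectBaseline("Example terminal automation project", 900.0, 2002,
                          "Replace aging displays", 40.0)
print(breaches_baseline(example, 960.0, 2003))   # True: both boundaries exceeded
```

A record of this kind only becomes useful for control purposes when, as the report notes, the cost and schedule estimates behind it rest on reliable data.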
Although FAA has begun efforts to establish new baselines for projects that were underway prior to AMS, program evaluation officials question the availability and the quality of operations and maintenance data that are being used to estimate life-cycle project costs. FAA’s history of unplanned cost increases, most recently seen with its STARS and WAAS programs, coupled with past deficiencies in cost estimating processes and practices point to the need to use reliable and complete data to establish realistic baselines. As for performance measurements, FAA does not have a unified effort underway to effectively measure progress toward achieving acquisition goals. FAA has established a goal to reduce the time to field systems by 50 percent and to reduce the cost of acquisitions by 20 percent during the first 3 years under AMS. FAA also plans to measure performance in such other critical areas as customer satisfaction and the quality of products and services. According to FAA’s evaluation, while individual organizations are attempting to measure progress in meeting the two goals, a coordinated agencywide measurement effort is lacking. FAA’s failure to field systems on time and within cost indicates the need for a comprehensive system of performance measurements that can help provide systematic feedback about accomplishments and progress in meeting mission objectives. The need for such measurements will become even more critical as FAA expedites the deployment of some projects. Clearly identified performance measurements will help FAA, the Congress, and system users assess how well the agency achieves its goals. On January 1, 2000, computer systems worldwide could malfunction or produce inaccurate information simply because the century has changed. Unless corrected, such failures could have a costly, widespread impact. The problem is rooted in how dates are recorded and computed. For the past several decades, systems have typically used two digits to represent the year, such as “97” for 1997, to save electronic storage space and reduce operating costs. This practice, however, makes 2000 indistinguishable from 1900, and the ambiguity could cause systems to malfunction in unforeseen ways or to fail completely. FAA’s challenge is great. Correcting this problem will be difficult and expensive, and must be done while such systems continue to operate. In less than 2 years, hundreds of computer systems that are critical to FAA’s operations, such as monitoring and controlling air traffic, could fail to perform as needed unless proper date-related calculations can be made. FAA’s progress in making its systems ready for the year 2000 has been too slow. We have reported that, at its current pace, it will not make it in time. The agency has been severely behind schedule in completing basic awareness and assessment activities—critical first and second phases in an effective year 2000 program. For example, just this month FAA appointed a program manager who reports to the Administrator. Delays in completing the first two phases have left FAA little time for critical renovation, validation, and implementation activities—the final three phases in an effective year 2000 program. With less than 2 years left, FAA is quickly running out of time, making contingency planning for continuity of operations even more critical. 
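The date ambiguity described above is easy to demonstrate. The sketch below assumes a system that stores only the last two digits of the year and computes elapsed time by subtraction, which is the convention the testimony describes; the windowing repair shown afterward is one common remediation technique, not a description of FAA's approach.

```python
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Elapsed years when only the last two digits of each year are stored."""
    return end_yy - start_yy

# A maintenance interval recorded as starting in "97" (1997) and ending in "03" (2003):
print(years_elapsed_two_digit(97, 3))    # prints -94, because "03" is indistinguishable from 1903

def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    """One common repair ("windowing"): interpret two-digit years below the pivot as 20xx."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

print(years_elapsed_windowed(97, 3))     # prints 6
```

The negative result in the first calculation is the kind of silent miscalculation that, unrenovated, could ripple through schedule, billing, and monitoring logic.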
If critical FAA systems are not year 2000 compliant and ready for reliable operation on January 1 of that year, the agency’s capability in several areas—including the monitoring and controlling of air traffic—could be severely compromised. The potential serious consequences could include degraded safety, grounded or delayed flights, increased airline costs, and customer inconvenience. We have made a number of recommendations aimed at expediting the completion of overdue awareness and assessment activities. Mr. Chairman, this concludes my statement. We will be happy to answer any questions from you or any Member of the Subcommittee.

Completed projects listed in appendix I include the following:
Automated Radar Terminal System (ARTS) IIIA Assembler (22-02)
Additional ARTS IIIA at FAA Technical Center (22-05)
Consolidated Notice to Airmen System (23-03)
Visual Flight Rules Air Traffic Control Tower Closures (22-14)
Altitude Reporting Mode of Secondary Radar (Mode-C) (21-10)
Enhanced Target Generator Displays (ARTS III) (22-03)
National Airspace Data Interchange Network IA (25-06)
Hazardous In Flight Weather Advisory Service (23-08)
En Route Automated Radar Tracking System Enhancements (21-04)
Sustain New York Terminal Radar Approach Control (TRACON) (22-18)
National Radio Communication System (26-14)
Direct Access Radar Channel System (21-03)
National Airspace Data Interchange Network II (25-07)
Modernization of Unmanned FAA Buildings and Equipment (26-08)
Large Airport Cable Loop Systems (26-05)
Interfacility Data Transfer System for Edwards Air Force Base Radar Approach Control (35-20)
Acquisition of Flight Service Facilities (26-10)
Radar Pedestal Vibration Analysis (44-43)
Low-Level Wind Shear Alert System (23-12)
Brite Radar Indicator Tower Equipment (22-16)
National Implementation of the “Imaging” Aid for Dependent Converging Runway Approaches (62-24)
Integrated Communications Switching System (23-13)
System Engineering and Integration Contract (26-13)
National Airspace Data Interchange Network II Continuation (35-07)
Instrument Landing System and Visual Navaids Engineering and Sparing (44-24)
Oceanic Display and Planning System (21-05)
Integrated Communications Switching System Logistics Support (43-14)
Replacement of Controllers Chairs (42-24)
ARTS IIIA-Expand 1 Capacity and Provide Mode C Intruder Capability (32-20)
Civil Aviation Registry Modernization (56-24)
Precision Automated Tracking System (56-16)
National Airspace Integrated Logistic Support (56-58)
Long Range Radar Radome Replacement (44-42)

Notes on selected completed projects from appendix I:
Installed at en route centers to allow processing of existing air traffic control software on new equipment.
Project comprised a variety of tower and terminal replacement and modernization projects. Project was continued in the Capital Investment Plan under projects 42-13 and 42-14.
Also known as the Radio Communications Link project, it was designed to convert aging “special purpose” Radar Microwave Link System into a “general purpose” system for data, voice, and radar communications among en route centers and other major FAA facilities.
Project was activated to sustain and upgrade air traffic control operations and acquire eight terminal radars awaiting the full implementation of the Advanced Automation System.
Project comprised a variety of diverse support projects and has been continued in the Capital Investment Plan under Continued General Support (46-16).
Over the past decade, we have reported on FAA’s progress in meeting schedule commitments for last-site implementation, which signals completion of the project. Prior to this year, we had used the dates from the 1983 NAS modernization plan. This year, after discussions with FAA officials, we are measuring FAA’s progress against an interim date—which in most cases represents the date of contract award or investment decision. We will continue to show the original date, but will only measure progress against the interim date. TDLS is the Tower Data Link Services. TDLS I (Predeparture Clearance/Flight Data Input/Output CRT/Rank Emulation) has been commissioned at all 57 sites; TDLS II (Digital-Automatic Terminal Information Service) has been installed at all 57 sites and commissioned at 48 sites.
Pursuant to a congressional request, GAO discussed the Federal Aviation Administration's (FAA) program to modernize its National Airspace System (NAS), focusing on: (1) the status of key modernization projects; (2) FAA's actions to implement recommendations to correct modernization problems; and (3) the opportunities and challenges facing FAA as it embarks upon its new modernization approach. GAO noted that: (1) since 1982, Congress has appropriated over $25 billion to the modernization program; (2) while FAA has fielded some equipment, historically, the agency has experienced considerable difficulty in delivering systems within promised cost and schedule parameters; (3) as a result, FAA has been forced to implement costly interim projects; (4) meanwhile, two key systems--the Wide Area Augmentation System and the Standard Terminal Automation Replacement System--have encountered cost increases and schedule delays; (5) GAO's work has pinpointed the root causes of FAA's modernization problems and has recommended actions to overcome them; (6) most recently, GAO found shortcomings in the areas of systems architecture (the overall modernization blueprint), cost estimating and accounting, software acquisition, and organizational culture; (7) although FAA has begun to implement many of GAO's recommendations, sustained management attention is required to improve the management of the modernization program; (8) FAA is collaborating with and seeking commitment from users in developing a new approach to make the modernization less costly and to provide earlier user benefits; (9) the challenge for FAA is to have disciplined processes in place in order to deliver projects as promised; and (10) FAA will also need to quickly address the looming year 2000 computer crisis to ensure that critical air traffic control systems do not malfunction or produce inaccurate information simply because the date has changed.
Following the terrorist attacks of September 11, 2001, the United States began military operations to combat terrorism both in the United States and overseas. Operations to defend the United States from terrorist attacks are known as Operation Noble Eagle. Overseas operations to combat terrorism are known as Operation Enduring Freedom, which takes place principally in Afghanistan, and Operation Iraqi Freedom, which takes place in and around Iraq. Figure 1 shows the primary locations where U.S. forces conducted operations in support of the war in fiscal year 2003. To support the war in fiscal year 2003, Congress appropriated $68.7 billion to DOD: $6.1 billion in the Consolidated Appropriations Resolution, 2003, and $62.6 billion in the Emergency Wartime Supplemental Appropriations Act, 2003. While most of these funds were only available for expenditure in fiscal year 2003, some could be expended in subsequent fiscal years. Of the $68.7 billion appropriated for GWOT, almost $16 billion was appropriated in the fiscal year 2003 Wartime Supplemental to a transfer account called the Iraqi Freedom Fund. The Iraqi Freedom Fund is a special account providing funds for additional expenses for ongoing military operations in Iraq, and those operations authorized by P.L. 107-40 (Sept. 18, 2001), Authorization for Use of Military Force, and other operations and related activities in support of the global war on terrorism. Congress has also appropriated funds for the reconstruction of Iraq and Department of State and U.S. Agency for International Development projects. We are reviewing the contracts involved in the reconstruction, as well as the funding for other projects, and will be issuing separate reports on these issues. As of September 30, 2003, DOD reported obligating a total of over $61 billion in fiscal year 2003 in support of the war. Among the operations that comprised the war on terrorism, Operation Iraqi Freedom amounted to about $39 billion or 64 percent of the total obligations, as shown in figure 2. The obligations reported for Iraqi Freedom are probably understated and the obligations reported for Operation Enduring Freedom overstated because, according to DOD officials, the initial obligations associated with the buildup to Iraqi Freedom were charged to Enduring Freedom. Officials in the Office of the Under Secretary of Defense (Comptroller) reclassified reported obligations to the appropriate operation after Iraqi Freedom began, based on anticipated and projected GWOT operations. Of the overall reported amount obligated within DOD for GWOT during fiscal year 2003, the Army reported the largest amount of obligations, 46 percent of the total, as shown in figure 3. (The Army had the largest number of military personnel engaged in the war.) In addition to the obligations reported by the other military services, about 13 percent of DOD’s GWOT obligations were reported by a total of 15 other DOD organizations, such as the Defense Information Systems Agency and the Defense Logistics Agency. Of these DOD organizations, the Defense Logistics Agency reported the largest amount of obligations—over $3.6 billion. The obligations reported for GWOT fall into three categories—operation and maintenance, military personnel, and investment. Operation and maintenance account funds obligated in support of the war are used for a variety of purposes, including transportation of personnel, goods, and equipment; unit operating support costs; and intelligence, communications, and logistics support.
Military personnel funds obligated in support of the war cover the pay and allowances of mobilized reservists as well as special payments or allowances for all qualifying military personnel, both active and reserve, such as Imminent Danger Pay and Family Separation Allowance. Investment funds obligated for the war are used for procurement, military construction, and research, development, test and evaluation. As shown in figure 4, GWOT obligations reported in the operation and maintenance account amount to almost $44 billion or 71 percent of the total. The Consolidated Department of Defense Terrorist Response Cost Report displays obligations in all accounts by specific categories. As previously cited, chapter 23 of the DOD Financial Management Regulations, which governs how all DOD organizations report financial data for contingency operations, defines these categories. Within the operation and maintenance account, the operating support category had the largest amount of reported obligations for fiscal year 2003—over $32 billion or 74 percent of the total. This category, which includes obligations incurred for such things as training, operational support, equipment maintenance, and troop support, had the highest level of obligations, in part reflecting the cost of using civilian contractors to provide housing, food, water, and other services to over 180,000 troops deployed overseas in support of GWOT. A large part of the operating support costs category—48 percent—is in two miscellaneous categories, other supplies and equipment ($7 billion) and other services and miscellaneous contracts ($8.5 billion). Most of the remaining reported GWOT obligations, $15.6 billion or 26 percent, were in the military personnel accounts. Within the military personnel account, the category reserve component called to active duty had the highest level of reported obligations—almost $9.3 billion or 59 percent of the total. This category captures the obligations reported for the salaries paid to reservists mobilized for active duty. According to service officials, more reservists were called to active duty than originally estimated and remained on active duty longer than planned. As with operation and maintenance obligations, there was also a large miscellaneous category, other military personnel, which accounted for about $3.8 billion, or 24 percent, of all military personnel obligations. When we discussed the results of our analysis with the Office of the Under Secretary of Defense (Comptroller) and the military services, officials recognized the large amount of obligations captured in miscellaneous categories. The Office of the Under Secretary of Defense (Comptroller) is considering how best to provide more specific detail in future cost reports. The adequacy of funding available for fiscal year 2003 GWOT obligations reported in military personnel and operation and maintenance accounts varied by service. The funding available for the war consists of funds directly appropriated to the military services for GWOT, the net transfer of funds from the Iraqi Freedom Fund, and reprogrammed funds originally appropriated to the services for peacetime operations. Within the military personnel accounts, as shown in table 1, in fiscal year 2003 the Army, Navy, and Air Force reported more obligations in support of the war than they received in funding for the war. To cover the shortfall in GWOT funding, these services had to use funds appropriated for their budgeted peacetime operations.
Officials from each of these services explained that the shortfall was a relatively small portion of their budgeted peacetime military personnel account. For example, the Army’s reported shortfall of $155.2 million represents less than 1 percent of its total peacetime appropriation. The Marine Corps, which had augmented its GWOT military personnel appropriation with funds from its peacetime military personnel account, ended the fiscal year with slightly less in obligations than it had in available funding—$1.8 million or less than 1 percent of its peacetime appropriation. Within the operation and maintenance accounts, as shown in table 2, in fiscal year 2003 the Army, Air Force, and Navy received funding that exceeded their reported GWOT obligations. At the same time the Marine Corps reported more GWOT obligations than it received in funding. In discussing our analysis of the difference between GWOT obligations and funding with the Army, Air Force, and Navy, we were told the following. The Army reported slightly more funding than obligations for the war. At the end of fiscal year 2003, the Army reported obligations that initially appeared to be more than $500 million less than the available funding. However, as of January 2004, the Army has subsequently updated its fiscal year 2003 reporting to reflect about $470 million in additional reported obligations. According to Army officials, the Army had not included in the September 30, 2003, consolidated cost report $494 million in obligations reported to support the Coalition Provisional Authority in Iraq. The Army received GWOT funding in fiscal year 2003 to support this organization, but the obligations were not captured in the Army’s accounting system used to record most other Army obligations. The Army also cancelled some obligations made before the end of the fiscal year, resulting in a total adjustment to the fiscal year 2003 cost report of $470 million. Thus the Army ended the year with about $30 million more in funding than reported obligations. Air Force officials told us that the $176.6 million, which appeared to be unobligated GWOT funding, was actually obligated late in the fiscal year. According to the officials, that amount was obligated for flying operations requirements that the Air Force decided were related to the war, but were not reported as such. Navy officials told us that the apparent unobligated GWOT funds ($299 million) were in fact obligated in support of the war but were originally, and incorrectly, reported as obligations in support of budgeted peacetime operations. These officials said that they would be updating their reporting for obligations incurred in fiscal year 2003 to reflect an additional $299 million in operation and maintenance obligations for the war. At the same time, the Navy returned $198 million to the Iraqi Freedom Fund that it believed was in excess of its operation and maintenance requirements for the war. The available funding in table 2 was adjusted to reflect the return of the $198 million. Returning these funds is in keeping with recommendations we made in our September 2003 report discussed above to monitor the obligation of funds in the services’ operation and maintenance accounts and ensure that all funds transferred to the services that are not likely to be obligated by the end of the fiscal year are transferred back to the Iraqi Freedom Fund. 
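The Army adjustment described above can be followed as simple arithmetic. The figures below are the rounded amounts reported in this section; the amount of cancelled obligations is our inference from the reported $494 million addition and the $470 million net adjustment, not an official figure.

```python
# Army operation and maintenance, fiscal year 2003 (dollars in millions, rounded).
apparent_unobligated_funding = 500   # funding initially appeared to exceed obligations by more than $500 million
cpa_obligations_added = 494          # Coalition Provisional Authority obligations reported later
cancelled_obligations = 24           # inferred from the reported net adjustment of about $470 million

net_adjustment = cpa_obligations_added - cancelled_obligations
remaining_excess_funding = apparent_unobligated_funding - net_adjustment

print(f"Net adjustment to the fiscal year 2003 cost report: about ${net_adjustment} million")
print(f"Funding remaining above reported obligations: about ${remaining_excess_funding} million")
```

Run with these rounded inputs, the sketch reproduces the roughly $470 million adjustment and the approximately $30 million of funding left over that the Army officials described.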
In subsequent work we plan to review GWOT obligations to detail the specific purposes for which funds were used and to determine whether the service requirements for which funding was obligated were war-related. The additional Air Force flying operations’ requirements and the funds the Navy recharacterized as being in support of the war will be included in that review. While the Marine Corps obligated $72.5 million more for GWOT than it had in funds at the end of fiscal year 2003, it, like the Navy, returned money to the Iraqi Freedom Fund. At the end of fiscal year 2003, Marine Corps officials believed that they could not obligate $152.2 million that had been transferred to the Marine Corps’ operation and maintenance account from the Iraqi Freedom Fund before the end of the fiscal year and so transferred it back to the fund. In retrospect, however, the Marines obligated more than expected. According to Marine Corps officials, this shortfall was covered by using normal peacetime operation and maintenance appropriations that units deployed in support of GWOT were not going to use. As noted with the Army and Navy analyses, the services have reported obligation updates to the Office of the Under Secretary of Defense (Comptroller) for inclusion in the Defense Finance and Accounting Service’s Consolidated DOD Terrorist Response Cost report for fiscal year 2003. The Defense Finance and Accounting Service is issuing monthly fiscal year 2003 update reports as the obligation data are updated; these updates must be added to the report as of September 30, 2003, to determine the total fiscal year 2003 obligations reported in support of GWOT. In official oral comments on a draft of this report, officials from DOD’s Office of the Under Secretary of Defense (Comptroller) stated that the department had no objections to the report. DOD also provided technical comments and we have incorporated them as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Budget Committees, the Secretary of Defense, the Secretaries of the military services, and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions, please contact me at (757) 552-8100 or by e-mail at [email protected]. Major contributors to this report were Steve Sternlieb, Ann Borseth, Madelon Savaides, Leo Sullivan, and John Buehler.
The Global War on Terrorism--principally involving operations in Afghanistan and Iraq--was funded in fiscal year 2003 by Congress's appropriation of almost $69 billion. To assist Congress in its oversight of spending, GAO is undertaking a series of reviews relating to contingency operations in support of the Global War on Terrorism. In September 2003, GAO issued a report that discussed fiscal year 2003 obligations and funding for the war through June 2003. This report continues the review of fiscal year 2003 by analyzing obligations reported in support of the Global War on Terrorism and reviews whether the amount of funding received by the military services was adequate to cover DOD's obligations for the war from October 1, 2002, through September 30, 2003. GAO will also review the war's reported obligations and funding for fiscal year 2004. In fiscal year 2003, DOD reported obligations of over $61 billion in support of the Global War on Terrorism. GAO's analysis of the obligation data showed that 64 percent of fiscal year 2003 obligations reported for the war on terrorism went for Operation Iraqi Freedom; among the DOD components, the Army had the most obligations (46 percent); and among appropriation accounts, the operation and maintenance account had the highest level of reported obligations (71 percent). The adequacy of funding available for the Global War on Terrorism for fiscal year 2003 military personnel and operation and maintenance accounts varied by service. For military personnel, the Army, Navy, and Air Force ended the fiscal year with more reported obligations for the war than funding and had to cover the shortfalls with money appropriated for their budgeted peacetime personnel costs. For operation and maintenance accounts, the Army, Navy, and Air Force appeared to have more funding than reported obligations for the war. However, the Navy and Air Force have stated that the seeming excess funding ($299 million and $176.6 million, respectively) was in support of the war on terrorism but had not been recorded as such. Therefore, Navy and Air Force obligations exactly match funding. The Marine Corps used funds appropriated for its budgeted peacetime operation and maintenance activities to cover shortfalls in funding for the war.
The mortgage assignment program was created in 1959 by section 230 of the National Housing Act. However, HUD only began operating the program in 1976 in settlement of a lawsuit. The program, intended to help mortgagors who have defaulted on HUD-insured loans to avoid foreclosure and retain their homes, provides mortgagors with financial relief by reducing or suspending their mortgage payments for up to 36 months until they can resume making regular payments. To enter the program, a mortgagor must apply and meet certain criteria, including that the default must have been caused by circumstances beyond the mortgagor’s control, such as the loss of employment or serious illness. However, after the 36-month period, a mortgagor’s delinquencies are not required to be eliminated or reduced by a specified time other than over the remaining term of the loan, which HUD can extend for up to 10 years. Most of the mortgages assigned under the program are insured by FHA under its Mutual Mortgage Insurance Fund (Fund). For these mortgages, the cost of the assignment program is financed by the Fund, which insures private lenders against losses on mortgages that finance purchases of one to four housing units. To cover losses, FHA deposits borrowers’ insurance premiums in the Fund. Historically, the Fund has been financially self-sufficient. However, if it were to become exhausted, the U.S. Treasury would have to directly cover lenders’ claims and administrative costs. We based our analysis of whether the assignment program helps borrowers avoid foreclosure and reduces FHA’s foreclosure losses primarily on data from two of HUD’s national information systems—the Single-Family Mortgage Notes Servicing System and the Single Family Insurance System—as of September 30, 1994. We used these data to analyze foreclosures and delinquencies and forecast the foreclosure rates of the 68,695 mortgages assigned since fiscal year 1989. We also built a cash flow model and prepared analysis to estimate the financial loss to FHA’s Fund from these loans by estimating the revenue and expense flows for these loans over their life. Our data reflect nationwide mortgage assignment statistics on single-family loans that were entered in HUD’s two national data systems as the Fund’s mortgage defaults that were assigned to avoid foreclosure—71,500 mortgage loans as of September 30, 1994. Loans assigned to HUD for other reasons were not included in our analyses. To determine how to improve the program and reduce its losses, we obtained information from four other mortgage assistance institutions that provide foreclosure relief to borrowers in default on single-family housing loans—the Department of Veterans Affairs (VA), Rural Housing and Community Development Service (RHCDS), Federal National Mortgage Association (Fannie Mae), and Federal Home Loan Mortgage Corporation (Freddie Mac). (See app. I for additional details on the scope and methodology of our work.) To improve the administration of the program, HUD recently has initiated changes to the program. 
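The cash flow model mentioned in the scope and methodology discussion above estimates the Fund's loss by projecting each assigned loan's expense and revenue streams over its life and comparing the result with the loss from an immediate foreclosure. The sketch below is only a schematic of that comparison; the discount rate and per-loan cash flows are invented for illustration and are not GAO's model or HUD's data. It does show, however, why delayed sale proceeds, servicing advances, and only partial payments can make an assigned loan costlier than an immediate foreclosure.

```python
def present_value(cash_flows, rate=0.07):
    """Discounted sum of a yearly cash flow stream; year 0 is the year of assignment."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Invented per-loan figures (dollars); costs to FHA are positive.
claim_paid_to_lender = 40_000

# Immediate foreclosure: pay the claim now, recover sale proceeds about a year later.
immediate_loss = claim_paid_to_lender - present_value([0, 25_000])

# Assignment: pay the claim now, advance servicing costs and collect only partial
# payments for five years, then recover sale proceeds in year six.
servicing_and_advances = present_value([0] + [1_500] * 5)
partial_payments = present_value([0] + [1_000] * 5)
delayed_sale_proceeds = present_value([0] * 6 + [25_000])
program_loss = (claim_paid_to_lender + servicing_and_advances
                - partial_payments - delayed_sale_proceeds)

print(f"Additional loss from assignment in this example: about ${program_loss - immediate_loss:,.0f}")
```

The positive difference in this invented example reflects the same mechanisms the report identifies: money advanced and tied up longer, with only partial payments coming in during the relief period.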
These include selling its currently assigned loans; implementing Activity Tracking, an automated collection computer subsystem; studying the costs and benefits of alternatives to foreclosure; permitting lenders to provide relief to borrowers, such as suspending or reducing mortgage payments, without prior approval from HUD; implementing a “compromise offer” program under which borrowers’ loans are considered to be paid off for less than the amount owed; and implementing for a limited period of time a program for reducing interest rates on certain program loans. HUD has also proposed contracting for loan servicing. The Office of Management and Budget (OMB) considers FHA’s mortgage assignment program to be a high-risk area because controls do not protect the financial interests and resources of the government. In the President’s fiscal year 1996 budget, OMB stated that the servicing of assigned loans was expensive, inefficient, and labor-intensive. Also, OMB noted that there is little evidence that the program achieves its goal of giving homeowners a chance to keep their homes during a temporary interruption of income. According to OMB, legislative changes should be considered to reduce or eliminate the assignment of loans in the future by greater reliance on the private sector as well as legislation to reduce the program’s forbearance period from 3 years to 1 year. To reduce the number of assigned loans and the required servicing of loans, OMB recommended that HUD continue to sell its assigned loans. We forecast, on the basis of historical data on the disposition of program loans, that about 35,400 (52 percent) of the 68,695 borrowers accepted into the program since fiscal year 1989 will eventually lose their homes through foreclosure. For the remaining loans (48 percent), we forecast that borrowers will pay off the loans and avoid foreclosure by either selling their homes or refinancing their mortgages, often after remaining in the program for a lengthy period of time. Some of these borrowers who eventually pay off their loans may have, under the compromise program, paid HUD an amount less than the total amount owed. (A detailed discussion of our methodology for forecasting the program’s foreclosure rates appears in app. II.) Figure 1 shows our estimates of conditional foreclosure rates based on loans that remained active until a given year and were assigned during a 17-year period (fiscal years 1977 through 1994). We estimate that conditional foreclosure rates will increase sharply over the first 7 years after a loan is accepted into the program, peaking at about 13 percent. The program’s conditional foreclosure rates substantially exceed those experienced on FHA’s nonassigned single-family loans during the same 17-year period. HUD’s records show that since fiscal year 1977, at least 96,500 borrowers have been accepted into the assignment program. About 71,500 of these borrowers were still assigned to HUD as of September 30, 1994. A large portion of them—39,603, or 55 percent—have been in the program fewer than 3 years (see fig. 2). As shown in figure 3, of the approximately 71,500 borrowers in the program as of September 30, 1994, 59 percent were current with forbearance agreements or current with their original mortgage payments. The remaining 41 percent were delinquent or pending foreclosure. Only 5 percent of the program’s borrowers were making full mortgage payments. 
When borrowers remained in the program beyond the 3-year relief period and therefore were required to make full mortgage payments, the proportion of borrowers current with repayment agreements dropped and the proportion of borrowers in foreclosure increased. Similarly, the average amount of delinquencies owed by borrowers increased. (See app. III for detailed information on borrowers’ compliance with repayment agreements.) Most of the 25,041 borrowers who left the program for whom records are available did so following foreclosure, while other borrowers paid off their loans and at times eliminated delinquencies. Of the 25,041 borrowers, HUD foreclosed on 14,707 borrowers (59 percent), while 10,334 borrowers (41 percent) paid off their loans. An example of a borrower who left the program through foreclosure is a Chicago mortgagor who was accepted into the program in November 1990 and was $9,495 behind in payments at that time. The loan’s outstanding principal balance at that time was $34,862. Although HUD determined that the mortgagor’s income was sufficient for him to make more than full mortgage payments, the mortgagor made only five payments over the next 3-1/2 years. By September 1994, when HUD began foreclosure, the borrower was over $25,000 behind in payments. Borrowers who paid their loans generally did so following the sale of their homes at a price that, in most cases, allowed them to repay the outstanding mortgage and the delinquent amount. For example, a Seattle, Washington, mortgagor defaulted on an $89,890 loan 15 months after obtaining it. The mortgagor found a new job after experiencing a salary cut on his previous job. When the mortgage was assigned in November 1990, the mortgagor was already $6,333 behind in payments. Initially, the mortgagor was allowed to make reduced payments of $400 per month, about half the full payment. After 2 years, the mortgagor was unable to pay off the delinquent amount, which had grown to $19,229 when he sold the house in April 1993. However, the sale proceeds enabled the mortgagor to fully satisfy his obligation to HUD. (See app. IV for cases in which some borrowers paid off mortgages and others did not.) Given the lower income of FHA borrowers, which can make them financially vulnerable, the assignment program’s operating procedures do not provide assurance that delinquent amounts will be repaid and that borrowers will succeed in avoiding foreclosure. These procedures include (1) accepting borrowers into the program after they have accumulated substantial loan delinquencies and therefore have an uncertain repayment ability and (2) a 36-month relief period when payments can be reduced or suspended, which permits outstanding delinquencies to grow even if borrowers are current with repayment agreements. Most FHA home loans are for moderate-income individuals. These individuals are likely to be more financially vulnerable than other mortgagors who are able to obtain home loans without FHA’s assistance. Under the assignment program, a borrower must miss at least three mortgage payments before submitting an application to enter the program. During the acceptance process, additional payments may be missed, and substantial delinquencies may accumulate over a period of 6 months or more. We randomly selected, as case studies, 136 loans from four loan categories—paid-off, current with payments, foreclosed on, and delinquent—from files at four HUD field offices—Boston, Chicago, Ft. 
Worth, and Spokane—to illustrate, among other things, the amount of delinquencies that borrowers had accumulated when they entered the program. Our review of these loans showed that, at the time they were accepted into the program, borrowers were an average of 8 months, or $4,014, behind in their mortgage payments. These loans had an average outstanding principal balance of $39,886 at that time. These figures, and others reported later that are based on these case studies, are not projectable to the universe of assigned loans. The program also allows 3 years of reduced or suspended mortgage payments. For borrowers who qualify for this program feature, delinquencies for unpaid interest and other expenses continue to grow. As shown in figure 4, as of the end of fiscal year 1994, all borrowers in the program for more than 1 year but fewer than 3 years experienced, on average, an increase in delinquent amounts from about $7,000 to $15,000. On average, delinquencies for all borrowers continued to grow, peaking at about $22,000 after 9 years in the program. Similarly, delinquencies for borrowers current with forbearance agreements also grew at about the same rate as those of all borrowers during the first 3 years but began to decline after the borrowers had been in the program for 3 years. Once the 36-month relief period is completed, borrowers are expected to resume full mortgage payments and, if possible, increase payments to reduce accumulated delinquent amounts. If borrowers cannot make full payments, HUD may initiate foreclosure action. There is no requirement, however, that borrowers pay off their delinquent amounts or leave the program in a specified time period, other than over the remaining term of the loan, which HUD can extend for up to 10 years. About 31,900 (45 percent) of the borrowers in the program as of September 30, 1994, had been in the program for more than 3 years. About 1,000 borrowers had been in the program for over 15 years. In assessing the cost to FHA of operating the program, we (1) forecasted the foreclosure and payoff rates for loans assigned since fiscal year 1989 and (2) estimated the expenditure and revenue flows for these loans over their expected life. Using historical data on the performance of individual loans in the assignment program, we developed estimates of loan-servicing costs, acquisition costs, and other costs for all surviving loans over their anticipated life. In addition, we estimated revenues received from loan payoffs, mortgage payments, and the sale of properties after foreclosure. In order to estimate the program’s net loss to FHA, we compared the resulting cost per assigned loan to the average loss that FHA would have experienced on these loans had they gone directly to foreclosure rather than to the assignment program. Our analysis showed that losses on the 68,695 loans assigned to HUD since fiscal year 1989 will average about $49,000 per loan. We subtracted from the estimated average loss of $49,000 the estimated $27,000 loss that FHA would have experienced had the loans not entered the assignment program, leaving an estimated net loss to FHA of about $22,000 per assigned loan. On the basis of this analysis, we estimate that FHA’s Fund will experience additional losses of about $1.5 billion over what it would have incurred if the loans entering the assignment program since fiscal year 1989 had immediately gone to foreclosure instead.
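The headline figures above follow from straightforward arithmetic, shown here as a quick check using the rounded per-loan averages reported in this section.

```python
loans_assigned_since_1989 = 68_695
avg_loss_in_program = 49_000        # estimated average loss per assigned loan
avg_loss_if_foreclosed = 27_000     # estimated loss had the loan gone directly to foreclosure

net_loss_per_loan = avg_loss_in_program - avg_loss_if_foreclosed
total_additional_loss = net_loss_per_loan * loans_assigned_since_1989

print(f"Net additional loss per assigned loan: ${net_loss_per_loan:,}")                      # $22,000
print(f"Total additional loss to the Fund: about ${total_additional_loss / 1e9:.1f} billion")  # about $1.5 billion
```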
Table 1 summarizes our estimates of the expenses and income associated with the program’s 68,695 loans over their life. The additional costs incurred by FHA are primarily attributable to its receipt of only partial payments on mortgage loans; delays in receiving funds from the sale of the assignment program’s properties that are eventually foreclosed; administrative costs; and advances made by HUD for taxes, insurance, and other expenses. FHA borrowers’ premiums pay for these losses, not the U.S. Treasury. To cover losses, FHA deposits borrowers’ insurance premiums in the Fund. According to 12 U.S.C. 1711, the Fund must meet or endeavor to meet statutory capital ratio requirements designed to achieve actuarial soundness; that is, it must contain sufficient reserves and funding to cover estimated future losses resulting from the payment of claims on defaulted mortgages and administrative costs. To offset substantial losses to the Fund that were incurred in the 1980s, FHA borrowers were required to pay higher insurance premiums beginning in July 1991. In our recent report and testimony on the actuarial soundness of the Fund, we reported that the economic value of FHA’s Fund clearly has improved significantly in recent years but that the Fund, as of the end of fiscal year 1993, had not yet accumulated sufficient capital reserves to cover losses during periods of adverse economic conditions as defined by the law. Options are available to the Congress to change the assignment program in ways that would reduce the losses incurred by the program. These options include directing HUD to shorten the 36-month relief period, set a time limit on eliminating delinquencies, and accept into the program only those borrowers who can afford half or more of their mortgage payments. Information provided by officials from four mortgage lending or purchasing institutions indicates that these institutions provide borrowers in default a shorter time period to begin full mortgage payments under the original loan or a modified loan and to repay delinquent amounts. They also use techniques different from HUD’s that could improve the effectiveness and reduce the cost of the program. VA usually capitalizes the delinquency and reamortizes the new loan balance (i.e., extends the time period for payment of the loan principal) as soon as it acquires the loan. In addition, VA will reduce the interest rate on the reamortized loan to as low as 3 percent below the current market rate if a reduction is necessary to bring the veteran’s payments to an affordable level. VA may also acquire loans for borrowers who are not able to resume payments immediately if they show the ability to do so in a reasonable period of time. VA field stations have significant discretion in deciding what constitutes a reasonable period; however, it is usually not extended beyond the point at which the loans reach a full year’s delinquency. During this period, VA may provide relief by agreeing to accept payments of less than a full installment or by extending complete forbearance. Fannie Mae and Freddie Mac provide relief for up to 18 months. They may extend this period longer under certain circumstances, but during the relief period, the borrower must eliminate the delinquency. Although RHCDS does not have a specified relief period, an RHCDS official told us that its county supervisors provide short-term relief on a case-by-case basis. Another option for reducing the program’s losses would require borrowers to pay half or more of their monthly mortgage payments.
We estimate that if all 68,695 borrowers who have entered the program since fiscal year 1989 had paid, and continued to pay, 50 percent of their original mortgage payments, the program would lose about $433 million more than if the loans had gone immediately to foreclosure, which is substantially less than our estimated loss of $1.5 billion. The mortgage payments being made by borrowers as of September 30, 1994, averaged about a third of the original mortgage payments. These borrowers would have to pay 67 percent of their original mortgage payments for the program to break even. In addition to a shorter period of relief, other mortgage assistance institutions stress resolving the delinquency by the end of the relief period. In contrast, the mortgage assignment program gives borrowers many years beyond the relief period to repay a delinquency, as evidenced by some borrowers who have been in the program for 15 years. If the borrower is unable to pay the delinquency within the 3-year relief period, HUD’s regulations require that the borrower repay the delinquency on or before the mortgage maturity date, but the borrower may be given up to 10 years beyond the maturity date. Freddie Mac, Fannie Mae, and VA also work closely with borrowers to provide long-term solutions, such as modifying the structure of a loan to resolve delinquencies. Officials from these organizations told us that they believe techniques such as refinancing and reducing interest rates to reduce monthly mortgage payments are successful alternatives to costly foreclosure. However, HUD seldom uses its authority to modify borrowers’ mortgage loans. Rather, HUD uses repayment agreements both before and after the 36-month relief period to secure repayment of outstanding delinquencies. These are generally 1-year term agreements based on the borrowers’ estimated income and expenses to repay a debt. HUD field office officials told us that the preparation and monitoring of these agreements requires extensive staff resources. According to HUD’s Director, Single-Family Servicing Division, the primary strategy HUD plans to follow to reduce the program’s losses is to sell its assigned loans and thereby reduce the number of loans it holds and services. In June 1994, HUD sold at auction about 15,000 performing and nonperforming single-family loans (loans in compliance with repayment agreements and those not in compliance) that were not in default when assigned, including 357 loans that were facing foreclosure. FHA received about $12.6 million from the sale of the 357 loans, which represents about 70 percent of the unpaid principal balance on these loans. FHA officials consider these results encouraging and believe that future sales will provide significant relief to field offices that have a large number of assigned loans. FHA plans to sell an additional 15,000 loans in calendar year 1995 and most of the remaining assigned loans over the next 2 years. By fiscal year 1997, HUD expects its inventory to consist only of newly accepted assigned loans that would be held by HUD for a short time before being sold. The purchasers of these loans would be required to comply with HUD’s assignment program’s servicing standards, including permitting 3 years of reduced or suspended mortgage payments. The assignment program operates at a high cost to FHA’s Fund and has not been very successful in helping borrowers avoid foreclosure in the long run.
The assignment program operates at a high cost to FHA’s Fund and has not been very successful in helping borrowers avoid foreclosure in the long run. The program helps about half of the financially troubled homeowners avoid foreclosure permanently. However, the costs incurred by HUD to achieve this result exceed the costs that would have been incurred if all assigned loans had gone immediately to foreclosure without assignment. While FHA borrowers’ premiums, not the U.S. Treasury, pay these costs, the program’s costs lessen the Fund’s ability to build reserves.

Options are available to the Congress to change the program to reduce its losses. The options, such as requiring borrowers to pay more in monthly mortgage payments, would reduce but not eliminate the program’s additional losses. To nearly eliminate the additional losses, the assignment program would have to require borrowers to begin full mortgage payments within a few months after entering the program. All of these options pose a trade-off: they would prevent some individuals and families from entering the program who would eventually bring their loans current and/or avoid foreclosure. However, unless changes are made to the present assignment program, its costs will continue to make it more difficult for the Fund to maintain financial self-sufficiency.

If the Congress believes that the additional losses incurred by the assignment program are excessive in relation to the number of borrowers who avoid foreclosure, it could consider eliminating the program. However, since some borrowers who default on their FHA mortgages can avoid foreclosure with some assistance, the Congress could consider replacing the mortgage assignment program with a short-term, temporary relief program of a few months for such borrowers. If, however, the Congress believes that the borrowers served by FHA’s single-family program are at high risk and therefore in need of additional assistance in the form of forbearance, changes to the program should be considered that would reduce but not eliminate additional future losses. The following are options that the Congress could consider:

- Require borrowers to (1) resume full mortgage payments within a shorter time period than the 36 months currently allowed and/or (2) eliminate outstanding delinquency amounts within a specified period. For example, the Congress may wish to require that borrowers resume full mortgage payments within 1 year of entering the program and eliminate outstanding delinquencies within 2 years. If borrowers are unable to bring their loan payments current and/or eliminate delinquencies within the specified time, the Congress may wish to consider requiring that HUD foreclose.

- Require that only borrowers who can pay half or more of their original mortgage payments be assigned to the program.

We provided a draft of this report to HUD, VA, RHCDS, Fannie Mae, and Freddie Mac officials to obtain their comments. We met with HUD and VA officials and obtained their comments. In a meeting with a HUD Special Assistant to the Assistant Secretary for Housing-Federal Housing Commissioner, HUD’s Director of the Single-Family Servicing Division, and officials from HUD’s Offices of General Counsel and Policy Development and Research, we obtained HUD’s comments. The comments focused on (1) the effects of past litigation on HUD’s management of its mortgage assignment program and (2) alternatives available to prevent foreclosure other than the options we suggest for changing the forbearance relief (reducing or suspending monthly mortgage payments for a certain period of time) provided through the assignment program.
Specifically, HUD commented that litigation has affected the evolution and operation of the assignment program. According to HUD officials, a consent decree that the Department entered into in 1979, along with the litigation preceding and following it, known collectively as the Ferrell v. Pierce litigation, has limited HUD’s options to modify the assignment program. The Department believes the Congress needs to understand these limitations when it considers changing the program. Under the consent decree, HUD agreed to, among other things, (1) operate the assignment program for 5 years in compliance with its January 1979 handbook without any modification that would curtail the rights of the mortgagors under the program and (2) after the 5-year period, operate either the present assignment program or an equivalent substitute to help mortgagors avoid foreclosure during periods of temporary financial distress. A series of lawsuits concerning HUD’s implementation of the consent decree followed.

We agree that the consent decree and the Ferrell v. Pierce litigation have limited HUD’s options to change the program. It is because of this limitation that the forbearance relief options we present are addressed to the Congress and not to the Secretary of HUD. So that the Congress has a full understanding of the litigation’s effects when considering options to the forbearance relief provided through the mortgage assignment program, HUD’s description of the current operation of the assignment program and the effect of past litigation on that program is provided in appendix V.

HUD also commented that if the Congress were to consider alternative relief measures for borrowers, there are methods widely used by the private sector to prevent foreclosure that are not discussed in our report. The alternatives to forbearance relief cited by HUD included (1) “modifying defaulted borrowers’ mortgage loans by reducing interest rates, (2) extending the remaining period of the loans, and/or (3) paying partial claims to remedy default with a new obligation from the borrower to repay FHA the amount of the claim.” HUD noted that while our report discusses some relief options used with other federally related mortgages, the options we present to the Congress for change do not include such alternatives. HUD also commented that, pursuant to section 918 of the Housing and Community Development Act of 1992, it is studying the adequacy of existing programs authorized to help FHA borrowers avoid foreclosure and alternatives to foreclosure being used with other federally related mortgages. HUD expects to issue this study shortly.

We agree that there are alternatives to foreclosure other than the forbearance relief provided through HUD’s assignment program. In fact, our report points out that Freddie Mac, Fannie Mae, and VA provide borrowers long-term solutions, such as modifying the structure of their loans to resolve delinquencies. Officials from these organizations told us that they believe techniques such as refinancing and reducing interest rates to lower monthly mortgage payments are successful alternatives to costly foreclosure. This report did not seek to analyze all possible alternatives to the mortgage assignment program, however, because of the focus of our work and our desire not to duplicate HUD’s efforts in studying such alternatives. It should also be noted that HUD has seldom made use of modified mortgage loans.
Consequently, assessing the merits of modifying financially troubled FHA single-family loans in lieu of the forbearance that HUD currently provides is difficult. In addition, no matter how successful other alternatives are in avoiding foreclosure, not all borrowers will be able to resume mortgage payments immediately, as is required under such options as refinancing, reducing interest rates, and extending the period of the loan. We recognize, however, that to the extent that such alternatives are effective in helping borrowers retain their homes without entering HUD’s assignment program, they could be a more effective way to avoid costly foreclosure than the current assignment program. HUD’s study on alternatives to foreclosure should be helpful to the Congress in assessing these alternatives. Our report should be helpful to the Congress in assessing the changes needed to HUD’s mortgage assignment program to reduce losses on those mortgages that enter the program, regardless of other alternatives that may be used to prevent assignment.

While HUD officials agreed that the program’s losses have exceeded those that would have been incurred if loans had gone immediately to foreclosure without assignment, they did not agree with the magnitude of our estimate of the additional cost that FHA incurs. We received no official estimate from HUD of the additional cost, although one HUD analyst said that he believes the additional cost is about one-third of our estimate. HUD currently has a contracted study under way that will produce an estimate of the program’s additional cost to FHA. HUD also provided clarifying information and technical and editorial comments for our consideration in completing our report, which we incorporated where appropriate.

VA’s Assistant Director for Loan Management, Loan Guaranty Service, generally agreed with the factual information presented in this report on that agency. We incorporated VA’s suggestions to further clarify our report as appropriate. In telephone conversations with RHCDS, Fannie Mae, and Freddie Mac officials, they told us that they agreed with the factual information presented in this report on their organizations and had no further comments.

We conducted our work between October 1993 and October 1995 in accordance with generally accepted government auditing standards. Unless you announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies to interested congressional committees; the Secretary of HUD; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-7631 if you or your staff have further questions. Major contributors to this report are listed in appendix VI.

Concerned about the rising number of loans assigned to the Department of Housing and Urban Development (HUD) and their financial impact, the Chairman, Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, asked us to determine whether the mortgage assignment program (1) helps borrowers avoid foreclosure, (2) reduces the Federal Housing Administration’s (FHA) losses, and (3) can be improved to reduce losses.
To determine whether the program helps borrowers avoid foreclosure, we analyzed information on foreclosures, delinquencies, and borrowers’ compliance with repayment agreements contained in two of HUD’s national information systems—the Single-Family Mortgage Notes Servicing System and the Single Family Insurance System—as of September 30, 1994. Our data reflect nationwide mortgage assignment statistics on single-family loans that entered these systems as section 203(b) mortgage defaults to avoid foreclosure. Loans assigned to HUD for other reasons were not included in our analyses. We did not perform a reliability assessment of controls over the data in the systems; however, we checked our data results through discussions with HUD personnel, comparisons to related automated accounting and financial reports, and reviews of sampled mortgagors’ repayment files. We randomly selected and examined 136 assigned loans as case examples from four loan categories—paid-off, current with payments, foreclosed on, and delinquent—from files at four HUD field offices—Boston, Chicago, Ft. Worth, and Spokane—to illustrate, among other things, cases in which some borrowers were able to and chose to pay off their mortgages or become current with their payments and others did not. We selected these field offices to obtain geographic diversity and to recognize differences in real estate markets.

To determine whether the program reduces losses, we used the data systems mentioned above as well as HUD’s Single-Family Accounting and Management System to estimate the foreclosure rates of mortgages assigned since 1989 and the revenue and expense flows for these loans over their life. We used these historical mortgage data to estimate the loan servicing, acquisition, and other costs of surviving mortgages. We also assessed the revenues received from early loan payoffs, mortgage payments, and sales of properties following foreclosure. We further compared the cost per assigned mortgage loan to the average loss experienced by FHA on mortgages that went directly to foreclosure rather than being accepted into the program. A detailed discussion of our methodology for forecasting program foreclosure rates and estimating program costs appears in appendix II.

To determine how to improve the program and reduce program losses, we obtained records, reports, and studies from HUD, the Department of Veterans Affairs (VA), the Rural Housing and Community Development Service (RHCDS), the Federal National Mortgage Association (Fannie Mae), and the Federal Home Loan Mortgage Corporation (Freddie Mac) and analyzed appropriate loan servicing guidelines and foreclosure prevention options. We also interviewed HUD (including HUD’s Office of the Inspector General), VA, RHCDS, Fannie Mae, and Freddie Mac officials at their headquarters in Washington, D.C., and local HUD officials in Boston, Chicago, Dallas, and Spokane. We also interviewed officials of five organizations concerned with defaulted loans—the Mortgage Bankers’ Association in Washington, D.C., the Legal Assistance Foundation, the Public Action Housing Policy Center, the Community and Economic Development Corporation of Cook County, Inc., and the Spanish Coalition for Housing.

This appendix describes the cash flow model we built and the analysis we conducted to estimate the financial loss to FHA’s Fund for program loans assigned during fiscal years 1989 through 1994.
We estimated the loss that the Fund will incur on the 68,695 loans that entered the program during this period on the basis of the assumptions stated in this appendix. To do so, we (1) estimated the costs that FHA has incurred on, and the revenues it has received from, these loans as of September 30, 1994, and (2) forecasted future costs and revenues over the remaining life of these loans. We converted all cash flow estimates to 1994 present values using an annual discount rate of 7 percent. The largest cost to the Fund is the cost of settling the lender’s claim on the mortgage, a cost that FHA must pay whether the foreclosure occurs immediately or the mortgage enters the assignment program. FHA incurs additional costs while loans are in the program, including the administrative costs of operating the program. Revenues received by FHA, including proceeds from the sale of properties following foreclosure and borrowers’ loan payments, partially offset program costs. The following sections of this appendix contain a detailed description of the data we used and how we estimated the costs and revenues associated with the program.

In our analysis, we used three of HUD’s computerized databases—the F-60 database, which provides current and historical information on all mortgage loans that HUD services under the assignment program; the A-43 database, which provides historical information on mortgages insured under the Fund before assignment; and the Single-Family Accounting and Management System (SAMS) database, which tracks properties held and eventually sold by HUD following foreclosure. From these databases, we obtained information on the initial characteristics of each loan, such as the year the loan was assigned, the initial unpaid principal and delinquency amounts, and the loan interest rate and term. We also obtained information on the current status of each loan, such as the current unpaid balance, the last payment date, and the delinquency status. We categorized the loans as foreclosed, prepaid, or active as of the end of fiscal year 1994.

We estimated the financial losses for program loans by examining all loans by the year assigned. Costs and revenues were computed for each year’s group of assigned loans over the life of the loans in the program. Cash flows out of the Fund when FHA pays (1) lenders’ mortgage claims, (2) taxes and insurance on properties, and (3) salaries and other administrative costs. Cash flows into the Fund when FHA collects revenues from (1) the sale of properties following foreclosure, (2) the early payoff of loans, and (3) payments made by mortgagors (borrowers). All cash flows are discounted at 7 percent to a 1994 base year.
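As a simple illustration of this discounting convention, the sketch below converts a hypothetical stream of yearly net cash flows for one book of business to a 1994 present value at 7 percent. The yearly amounts, the function name, and the variable names are illustrative assumptions, not estimates from our model.

# A minimal sketch of the discounting convention: each year's net cash flow for a
# book of assigned loans is converted to a 1994 present value at a 7 percent annual
# rate and then summed. The yearly amounts below are hypothetical placeholders.

DISCOUNT_RATE = 0.07
BASE_YEAR = 1994

def pv_to_base_year(net_flows_by_year, base_year=BASE_YEAR, rate=DISCOUNT_RATE):
    """Discount flows after the base year and compound flows before it, then sum."""
    return sum(amount / (1 + rate) ** (year - base_year)
               for year, amount in net_flows_by_year.items())

# Hypothetical net flows (outflows negative) for one year's book of business:
# lenders' claims paid at assignment, then partial borrower payments net of
# advances and administrative costs, and eventual sale and payoff proceeds.
example_flows = {
    1989: -80_000_000,
    1990: 3_000_000,
    1991: 3_000_000,
    1992: 2_500_000,
    1993: 2_500_000,
    1994: 20_000_000,
    1995: 15_000_000,
}

print(f"Net 1994 present value: ${pv_to_base_year(example_flows):,.0f}")

Flows that occur before 1994 are compounded forward to the base year and later flows are discounted back, which is what converting all estimates to 1994 present values entails.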
We assumed that the net cost to the Fund was partially a function of foreclosure and payoff rates. Other factors that affected costs included the ratios of unpaid principal, receivables due FHA, and advances to the original loan amount, as well as the policy year of the loans. In addition, we assumed that FHA would continue to receive partial and delayed payments on some assigned mortgages and that foreclosure and prepayment behavior will remain the same in future years as in the past. This is a critical assumption because of data limitations. As a result, our analysis does not take into account that the loans assigned from fiscal years 1989 through 1994 may differ from earlier loans in ways that affect their prepayment and foreclosure probabilities beyond 6 years from the date of assignment. Given these assumptions, we projected future loan activity for foreclosures, prepayments, and surviving loans. Because of inadequate historical data, it was not possible to rigorously estimate foreclosure and prepayment probabilities incorporating economic indicators, such as unemployment rates, payment-to-income ratios, current interest rates, and house price appreciation rates.

The Fund incurs a number of costs in operating the program, including the costs to acquire loans following default, the costs to administer the program, and property expenses. The largest cost relates to the acquisition of loans before they enter the program. Acquisition costs were compiled for each year’s book of business. The total acquisition cost for all 68,695 loans is about $4.9 billion, about 89 percent of the total cost of $5.5 billion incurred by FHA’s Fund on these loans.

Administrative costs include staff salaries for those servicing program loans and other costs related to the program’s application approval process and the processing of defaulted loans for foreclosure. The administrative costs used in our estimates were those developed by the Congressional Budget Office (CBO). CBO estimated the assignment program’s administrative costs and staffing needs—full-time equivalents (FTE)—for each phase of the loan assignment process: assignment requests, endorsements, servicing, and defaulting mortgages. First, we used CBO’s estimates of the cost of each administrative function in 1994 to estimate the cost per loan for each function. We then applied this figure to each year’s loan activity to estimate the costs incurred in that year for each function (this step is sketched below). Next, we used a real discount rate of 3.5 percent per year to convert the estimates to 1994 present values. CBO’s FTE estimates and GAO’s cost-per-loan and total cost estimates are shown in table II.1, which illustrates that the administrative cost for the 68,695 loans assigned between fiscal year 1989 and the end of fiscal year 1994 totals about $451 million over the life of these loans, about 8 percent of the costs incurred by FHA’s Fund. Salary costs, which averaged $48,017 per FTE in fiscal year 1994, were used for all FTEs listed. Assignment request costs were allocated to all program loans, although the majority of these costs were for processing loans that were not accepted into the program. Endorsement costs were computed for all 68,695 loans. Servicing costs were applied every year for as long as a loan remained in the program. Default costs were computed for foreclosed loans by year of default.

When borrowers are not current on their mortgages, FHA often incurs additional costs, including advances for property taxes, insurance, and other expenses. HUD makes these payments to ensure clear title to the property and to protect its investment in case of fire. These costs totaled about $100 million, about 2 percent of the costs incurred by the Fund on these loans, and at times are not recovered from the borrower.
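The administrative-cost step described above, in which a per-loan cost for each function is applied to each year’s loan activity and converted to a 1994 present value at a 3.5 percent real rate, can be sketched as follows. The per-loan costs, loan counts, and names used here are hypothetical placeholders, not CBO’s or our figures; a similar calculation could be applied to advances.

# A minimal sketch of the administrative-cost step: a per-loan cost for each function
# is applied to each year's loan activity, and the result is converted to a 1994
# present value with a 3.5 percent real discount rate. All costs and counts below
# are hypothetical placeholders.

REAL_RATE = 0.035
BASE_YEAR = 1994

# Hypothetical cost per loan, in dollars, by administrative function.
COST_PER_LOAN = {
    "assignment request": 250,
    "endorsement": 120,
    "servicing": 180,
    "default processing": 300,
}

# Hypothetical loan activity: {year: {function: number of loans handled that year}}.
ACTIVITY = {
    1992: {"assignment request": 12_000, "endorsement": 9_000, "servicing": 30_000},
    1993: {"assignment request": 13_000, "endorsement": 10_000, "servicing": 38_000,
           "default processing": 2_000},
    1994: {"assignment request": 14_000, "endorsement": 11_000, "servicing": 45_000,
           "default processing": 2_500},
}

total_pv = 0.0
for year, counts in ACTIVITY.items():
    yearly_cost = sum(COST_PER_LOAN[function] * count for function, count in counts.items())
    total_pv += yearly_cost / (1 + REAL_RATE) ** (year - BASE_YEAR)

print(f"Administrative cost in 1994 present value: ${total_pv:,.0f}")

In our analysis, the analogous inputs were CBO’s 1994 function costs and the program’s actual loan activity, as summarized in table II.1.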
To estimate the program’s revenues, we recorded the characteristics and status of loans for each year’s book of business. These data were used to estimate ultimate foreclosure and prepayment probabilities of 52 percent and 48 percent, respectively. The conditional foreclosure and prepayment probabilities for each year were based on the actual number of loans that were foreclosed on and paid off between fiscal year 1989 and the end of fiscal year 1994. We estimated these conditional probabilities using data for the 6-year period ended September 30, 1994. These probabilities were for loans entering the program during a 17-year period (fiscal years 1977 through 1994) and represented loan years 1 through 17. We assumed that the conditional foreclosure and prepayment rates for years beyond 1994 (loan years 18-30) were the same as for loan year 17. Figures II.1 and II.2 illustrate the estimated conditional foreclosure and prepayment probability rates by loan year.

Revenue estimates were based on the percentage of loans in five loan status categories—current, current with forbearance, delinquent with forbearance, delinquent with no forbearance, and pending foreclosure—and their expected performance in the future. For each year’s book of business, we analyzed the ratio of unpaid balance to loan amount, the amount of receivables outstanding, the amortized payment amounts, and the actual payments made for each loan category. We also included the amount of advances owed and the original loan amounts in the estimates.

To estimate foreclosure revenues, an average recovery rate for loans foreclosed and sold was obtained from the SAMS data on 203(b) loans foreclosed during fiscal years 1983-94. Recovery rates ranged between 43 and 67 percent of acquisition costs each year, averaging 59 percent. The average recovery rate of 59 percent was applied to the acquisition costs of all foreclosed loans. Average acquisition costs were used in estimating foreclosure revenues. Specifically, multiplying the average acquisition cost for each year by the recovery rate for each foreclosed loan results in an estimated total foreclosure revenue of about $1 billion, about 48 percent of the $2.1 billion in revenues to be obtained by FHA’s Fund on these mortgages.

Prepayment revenues are based on data for all loans. Using the number of loans that have paid off and those forecasted to pay off, the unpaid principal balance at the time of payoff was estimated and summed for all loans, totaling about $955 million, about 45 percent of the revenues to be obtained by FHA. In estimating the unpaid principal balance, we used the ratio of unpaid balance to original loan amount for each year. Using the average loan amount, the year in the program, and the number of expected prepayments, we estimated prepayment revenues for each year. For years 19 through 30, we assumed that the ratio of unpaid balance to original loan amount will continue to decrease at an accelerated rate. To determine the unpaid balance for years 19 through 30, a simple regression was applied to the unpaid balance to original loan amount ratio for years 1-18, in which each year’s ratio depends on the previous year’s ratio (sketched below). The resulting parameters were used to estimate the unpaid balance to loan amount schedule for years 19-30.
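The extrapolation of the unpaid balance to original loan amount ratio described above can be sketched as follows. The ratios assumed for loan years 1 through 18, and the function and variable names, are hypothetical placeholders rather than the program’s actual schedule; the sketch simply fits each year’s ratio against the previous year’s ratio by ordinary least squares and carries the schedule forward through loan year 30.

# A minimal sketch of the extrapolation: the unpaid-balance ratio for loan years
# 19-30 is projected with a simple regression in which each year's ratio depends on
# the previous year's ratio. The ratios for years 1-18 are hypothetical placeholders.

def fit_lag_one(ratios):
    """Ordinary least squares fit of ratio[t] = a + b * ratio[t - 1]."""
    x, y = ratios[:-1], ratios[1:]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b

# Hypothetical unpaid-balance-to-original-loan-amount ratios for loan years 1-18,
# declining at a slowly accelerating rate.
observed = [0.99 - 0.018 * t - 0.0002 * t * t for t in range(18)]

a, b = fit_lag_one(observed)
schedule = list(observed)
for _ in range(19, 31):          # extend the schedule through loan year 30
    schedule.append(a + b * schedule[-1])

for year, ratio in enumerate(schedule, start=1):
    print(f"loan year {year:2d}: unpaid balance / original loan amount = {ratio:.3f}")

In this toy schedule, the fitted slope exceeds 1 because the observed ratios decline at an increasing rate, so the projected ratio continues to fall at an accelerating pace, consistent with the assumption stated above.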
We forecasted loan payment revenues using the estimated number of loans remaining in the program and the actual and scheduled payments made for each loan category. Actual loan payments averaged about 34 percent of scheduled payments. It was assumed that the assigned loans will have the same distribution over the loan categories that they had in fiscal year 1994 but that the length of time in the program varies. Ratios of actual to scheduled payments were also assumed to vary by time in the program. As loans age, payment ratios rise, indicating that older loans are paying a higher percentage of scheduled payments. Mortgagors’ total payments for each year through the year 2023 for each year’s book of business were summed to obtain the estimated total payment revenue of about $152 million, about 7 percent of the revenues obtained by FHA’s Fund for loans assigned since the beginning of fiscal year 1989.

Approximately 39,600 (55 percent) of the 71,500 borrowers in the program as of September 30, 1994, had been in the program 3 years or fewer. HUD’s records show that of the 39,600 borrowers, 26,000 (66 percent) are current with repayment agreements while the remaining 34 percent are not. Of the 26,000 borrowers who are current with repayment agreements, 36 percent are current with their original mortgage payments. The remaining borrowers (64 percent) are current with repayment agreements that call for reduced or suspended payments. When borrowers remain in the program beyond the 3-year relief period and therefore are required to make full mortgage payments, the proportion of borrowers current with repayment agreements drops and the proportion of borrowers in foreclosure increases. Similarly, the average amount of delinquencies owed by borrowers increases (see figs. III.1 and III.2). Of the approximately 31,900 borrowers who have been in the program more than 3 years and are required to make full mortgage payments, 38 percent are current on their repayment agreements.

People buy homes for shelter and investment purposes. Normally, they do not plan to default on a loan. However, conditions that lead to defaults occur. Defaults may be triggered by a number of events, such as unemployment, divorce, or death. These events are not likely to trigger foreclosure if the home can be sold for more than the mortgage balance and selling expenses. However, if the property is worth less than the mortgage, these events may trigger a foreclosure. Prepayments may be triggered by other events, such as declining interest rates or rising house prices, which in turn may result in the refinancing or sale of a residence.

To illustrate that some borrowers were able to and chose to pay their mortgages while others did not, we randomly selected 136 case example loans from four loan categories—paid-off, current with payments, foreclosed on, and delinquent—from files at four HUD field offices—Boston, Chicago, Ft. Worth, and Spokane. Of the 136 borrowers, 78 had paid off their loans, 34 were current with their mortgages, and 24 had either been foreclosed on, provided HUD with a deed in lieu of foreclosure, or were delinquent on their loans. Borrowers who had been foreclosed on, had given FHA a deed in lieu of foreclosure, or had experienced growing delinquencies were generally unable to resume full payments, and they experienced additional problems after assignment that intensified their financial difficulties. These borrowers generally encountered one or more of the following situations after assignment: (1) intermittent job loss with a reduction in income, (2) reduction in income due to divorce, (3) one or more serious illnesses or injuries, (4) loss of a high-paying job and reduced income from a new job, and/or (5) unanticipated housing repairs.
Only a few borrowers failed to make their mortgage payments because they had high installment debt. While FHA does not keep track of borrowers after foreclosure, FHA loan servicers familiar with foreclosures told us that after foreclosure, borrowers generally either rent an apartment or are able to stay with relatives. Furthermore, according to the servicers, program borrowers who experience foreclosure have experiences similar to those of FHA borrowers who go through foreclosure immediately without assignment. However, officials from two housing counseling agencies told us that some borrowers could become homeless after foreclosure.

In contrast, the 34 borrowers who were able to become current with their loans generally did not experience such a litany of problems. Although their incomes also declined, they either still had jobs, found new jobs by the time HUD accepted their loans for assignment, or were able to obtain second jobs to supplement their incomes. As a result, 25 (about 73 percent) of the borrowers who became current were able to resume full or increased mortgage payments immediately upon entering the program. Of the 34 borrowers who became current on their loans, 13 cured their delinquencies in less than 2 years. However, the remaining 21 borrowers took 92 months, on average, to cure their delinquencies. Seven borrowers took over 10 years to become current with their original mortgage payments.

Almost all of the 78 borrowers included in our case studies who had already paid off their mortgages did so by selling their homes or refinancing their mortgages. Of the 78 borrowers, 71 sold or refinanced their homes, 4 paid off their mortgages with insurance settlement payments, and 3 paid through regular or increased payments. Borrowers who sold their homes were, on average, 8 months and $4,169 behind in mortgage payments at the time their loans were assigned; the delinquency had grown to $6,088 by the time they sold or refinanced their homes. However, the proceeds from the sales were generally sufficient for these borrowers to pay off their original notes and the delinquencies. Generally, these borrowers had either held the properties for more than 10 years or lived in areas where housing had significantly appreciated in value since the homes were purchased. For example, according to a HUD field office official, housing in Spokane has almost doubled in value since 1985. In contrast, the value of homes in the Fort Worth area did not significantly appreciate during this period. Thus, borrowers who sold their homes in areas where housing had significantly appreciated had equity in their homes when they defaulted on their mortgages. Almost half of the 78 borrowers who paid off their mortgages did so within 2 years of assignment, and almost two-thirds did so within 3 years of assignment.

Sally S. Leon-Guerrero, Staff Evaluator
Pursuant to a congressional request, GAO reviewed whether the Department of Housing and Urban Development's (HUD) mortgage assignment program: (1) helps borrowers avoid foreclosure; (2) reduces the Federal Housing Administration's (FHA) foreclosure losses; and (3) can be improved to reduce such losses. GAO found that: (1) the HUD mortgage assignment program helps borrowers avoid immediate foreclosure but is not successful in helping borrowers avoid foreclosure or retain their homes on a long-term basis; (2) about 52 percent of the 68,700 borrowers in the mortgage program will lose their homes through foreclosure, and the remaining borrowers will pay off their loans after the sale or refinancing of their homes; (3) the mortgage assignment program has not reduced FHA foreclosure losses, since FHA incurs additional costs under the program that more than offset the savings from keeping some loans out of foreclosure; (4) FHA will incur losses of more than $1.5 billion for those borrowers accepted into the mortgage program since fiscal year 1989; (5) although FHA borrowers' premiums pay for these additional losses, the losses make it more difficult for the single-family insurance program to remain self-sufficient; (6) options that would reduce additional program losses include reducing the 3-year relief period provided to borrowers, setting a time limit on eliminating delinquencies, and accepting only borrowers who can pay half or more of their mortgage payments; and (7) FHA would have to require borrowers to begin full mortgage payments within a few months after entering the program to nearly eliminate additional program losses.
The Attorney General established OPR in December 1975, in response to the ethical abuses and misconduct that DOJ officials committed during the Watergate scandal, to ensure that department employees perform their duties in accordance with professional standards. OPR’s mission is to hold accountable department attorneys, and the law enforcement agents who work with those attorneys, who abuse their power or otherwise violate the ethical standards required of them by law. OPR has jurisdiction to investigate allegations of professional misconduct when the allegations relate to the exercise of an attorney’s authority to investigate, litigate, or provide legal advice. OPR is headed by a Counsel, a career Senior Executive Service member who reports to the Attorney General and the Deputy Attorney General.

Federal Rules of Criminal Procedure 16 and 26.2, 18 U.S.C. § 3500 (the Jencks Act), Brady v. Maryland, 373 U.S. 83 (1963), and Giglio v. United States, 405 U.S. 150 (1972) establish the discovery obligations of federal prosecutors. According to DOJ policy, it is the obligation of federal prosecutors, in preparing for trial, to seek all exculpatory and impeachment information from all members of the prosecution team to ensure that the defendant has a fair trial. The prosecution team includes, among others, federal, state, and local law enforcement officers participating in the investigation and prosecution of the criminal case against the defendant. DOJ policy also directs that allegations of misconduct be evaluated to determine whether the misconduct at issue is serious and, if so, reported to the appropriate office in DOJ.

OPR receives complaints of professional misconduct from a variety of sources, including judicial opinions and referrals, private individuals and attorneys, DOJ employees, and other federal agencies. OPR is to review each complaint and assess whether an attorney engaged in professional misconduct. If an attorney is found to have engaged in misconduct, OPR is to refer its findings to PMRU or the attorney’s component head to consider disciplinary action. Attorneys who are found to have engaged in misconduct can appeal disciplinary decisions, or submit grievances, to their component head, the Deputy Attorney General, or the Merit Systems Protection Board (MSPB), depending on the component for which they work and the type and length of the discipline imposed. Figure 1 shows DOJ’s process for managing and disciplining professional misconduct.

Inquiry. OPR initiates an inquiry when it needs more information to resolve the complaint. In such cases, OPR is to request a written response to the allegations and supporting documentation—such as documents or e-mail records regarding the underlying allegation of misconduct and the attorney’s professional background and experience, among other things—from the attorney who is the subject of the complaint and the component head. OPR may also collect documents and e-mail records and review case files and court pleadings.

Investigations. In cases that cannot be resolved based on a review of the written record, OPR is to initiate an investigation of the alleged misconduct. This includes requesting and reviewing additional relevant documents and conducting interviews of the subject attorney(s) and witnesses. OPR makes findings of professional misconduct only after conducting a full investigation.

Discipline. When OPR determines an attorney has engaged in misconduct, OPR provides a written report of its findings and conclusions to PMRU or the attorney’s respective component management for disciplinary action.
PMRU is to review OPR’s findings to determine whether OPR’s evidence is sufficient to support a finding of misconduct. If PMRU decides to reduce OPR’s finding of misconduct to one of poor judgment, it is to refer its decision to the attorney’s component management for discipline. Component management may ask the Office of the Deputy Attorney General for the authority to reject OPR’s findings if management disagrees with them. However, component management must uphold OPR’s findings if the Office of the Deputy Attorney General denies its request. For instances in which OPR found the attorney to have exercised poor judgment, OPR is to refer its findings to the attorney’s component management to determine whether discipline is appropriate.

Impose discipline. If PMRU, component management, or the Office of Attorney Recruitment and Management (OARM) determines that an attorney engaged in misconduct, component management is responsible for implementing the discipline decided upon. For attorneys within USAOs and the Criminal Division, PMRU is the proposing and deciding disciplinary office. Because PMRU speaks for the department on such matters, the two components are not at liberty to disagree with PMRU’s decision. For attorneys in other DOJ components, component managers, who handle these matters as only one of their many assigned responsibilities, are the proposing and deciding officials for admonishments, reprimands, and suspensions of 14 days or less. For suspensions of 15 days or more, demotions, and removals, component management is the proposing official and OARM is the deciding official. Component management may disagree with OPR’s findings, but only with the approval of the Office of the Deputy Attorney General.

Grievance or appeal. USAO and Criminal Division attorneys who are found to have engaged in misconduct can grieve PMRU’s decision to the Deputy Attorney General for findings in which PMRU imposed a suspension of 14 days or less. For findings in which PMRU imposed a suspension of 15 days or more, these attorneys may appeal to MSPB. Attorneys within other DOJ components who are found by component management or OARM to have engaged in misconduct can submit grievances for suspensions of 14 days or less to a higher-level official in their component and, for suspensions of 15 days or more, can appeal to MSPB.

OPR uses its Analytical Framework to determine whether an attorney engaged in misconduct, and DOJ employees may refer to OPR’s Analytical Framework when determining whether an action constitutes misconduct. Under the Analytical Framework, OPR finds that department attorneys engaged in professional misconduct when they intentionally violated, or acted in reckless disregard of, an obligation or standard imposed by law, an applicable rule of professional conduct, or department regulation or policy. Under the framework, attorneys can also be found to have exercised poor judgment, engaged in other inappropriate conduct, made a mistake, or acted appropriately under all the circumstances.

The Privacy Act places limitations on the disclosure of specific findings of professional misconduct and other information that OPR maintains about these cases, such as the name of the attorney found to have engaged in misconduct. OPR, however, discloses summary-level information on its findings in its Annual Report, including descriptions of investigations it closed and its resolution of the matter, without identifying the subject of its investigation.
Historically, Members of Congress and other third-party stakeholders, such as the American Bar Association, have stated that they believe DOJ’s processes for investigating and disciplining professional misconduct are not transparent and prevent attorneys from being held publicly accountable for their actions. These long-standing concerns have prompted some Members of Congress to publicly call for allowing DOJ’s Office of the Inspector General (OIG) to investigate allegations of professional misconduct so as to better ensure that the public is provided sufficient information on attorney behavior. Currently, in accordance with statute, DOJ’s OIG does not have jurisdiction to investigate complaints of professional misconduct against DOJ attorneys, including complaints against the Attorney General, Deputy Attorney General, and other senior department attorneys; the OIG may otherwise conduct audits and investigations it considers appropriate, including regarding OPR. However, the OIG has partnered with OPR in a few instances to investigate the Attorney General, the Deputy Attorney General, and other high-ranking DOJ officials in professional misconduct investigations because of overlapping issues involving both OIG’s and OPR’s jurisdictions.

Legislation that would allow the OIG to investigate professional misconduct complaints has been introduced on numerous occasions (e.g., S. 2127, 113th Cong. (2014); H.R. 3847, 111th Cong. (2009); S. 2324, 110th Cong. (2007); H.R. 9238, 110th Cong. (2007)), with the most recent legislation introduced in March 2014. The Attorney General testified in April 2014 before Congress that he does not support any action that would put misconduct investigations under the OIG’s jurisdiction because he believes that OPR has unique expertise for looking at complaints of misconduct and, where appropriate, recommending punishment. On the other hand, the current DOJ Inspector General has criticized the different treatment related to professional misconduct that department attorneys receive from OPR’s oversight, noting that investigating attorneys differently from other department employees has a detrimental effect on public confidence in DOJ’s ability to review its own attorneys’ misconduct. The DOJ Inspector General has also testified in support of OIG jurisdiction over professional misconduct investigations, stating that OIG’s statutory and operational independence from DOJ ensures that OIG investigations occur through a transparent and publicly accountable process. In 1994, GAO issued a legal opinion stating that GAO does not believe an OIG is institutionally less capable of reviewing matters that pertain to discretionary legal judgments, provided the OIG has the necessary experience and expertise to do so.

Other stakeholders have raised concerns about the transparency of OPR’s misconduct investigations. For example, in August 2010, the American Bar Association called on DOJ to release information on completed professional misconduct investigations to give the public confidence that lawyers who engage in serious misconduct are held accountable and to educate the public about the types of complaints that often are made but are unwarranted. Additionally, in March 2013, the National Association of Assistant United States Attorneys called on OPR to make its findings from misconduct inquiries and investigations more accessible and available to Assistant U.S. Attorneys. The Association believes that doing so will allow Assistant U.S.
Attorneys to better ensure due process in OPR investigations. OPR’s position, however, is that it is prohibited under the Privacy Act from releasing specific information related to its investigations—such as the name of the accused attorney—unless the disclosure falls under the act’s routine-use clause. According to OPR, this clause allows OPR to share information on its investigations with Congress and for other routine uses, such as for bar disciplinary action or in response to a written request by a judicial officer where the information is relevant to the judicial office or the court. To help provide greater transparency of its investigations, OPR provides summaries of its findings in its Annual Reports.

OPR has implemented processes to better manage professional misconduct complaints since 2011, and DOJ is taking steps to improve how it identifies and prevents such misconduct among department attorneys. However, DOJ has not implemented its plan to expand PMRU’s jurisdiction to ensure that discipline for professional misconduct is applied consistently and in a timely manner for all department attorneys. Furthermore, not all DOJ components have mechanisms in place to ensure that attorneys found to have engaged in misconduct serve the discipline imposed upon them.

OPR has taken steps to increase its timeliness in managing the approximately 1,000 professional misconduct complaints it receives each year by redesigning its processes for receiving and investigating these complaints. According to OPR’s Deputy Counsel, prior to 2011, OPR could not resolve the number of complaints received in a timely manner because the process it used to assess complaints was time-consuming and inefficient. For example, prior to 2011, OPR opened many misconduct complaints as investigations rather than inquiries, and investigations are inherently more time-consuming and costly for the agency because they involve in-depth file reviews and interviews. At that time, OPR also staffed its office in part with attorneys who were on detail from other DOJ components. According to OPR’s Deputy Counsel, because these attorneys were on detail, they lacked the expertise to assess and investigate complaints most efficiently. Often, because of these attorneys’ short tenure with the office, they did not resolve complaints before completing their details. New attorneys assigned to these matters would restart the inquiry or investigation, which increased the amount of time it took to resolve complaints. The Deputy Counsel reported that, prior to 2011, OPR sometimes took 90 days or more to initially assess complaint allegations and 2 years or more to completely investigate and resolve a complaint.

To better ensure the timeliness of complaint review and resolution, OPR redesigned its process so that complaints are reviewed for merit during an inquiry phase before OPR approves resources for an investigation. According to OPR’s Deputy Counsel, OPR made this change to ensure that it expends staffing resources only on investigations of complaints for which there is a reasonable likelihood of a misconduct finding. According to OPR’s Deputy Counsel, OPR is currently staffed with 21 attorneys with experience in investigating professional misconduct allegations to review and investigate complaints and does not have attorneys detailed from other components.
The Deputy Counsel said that OPR no longer has to expend additional time and resources training staff on short details, so OPR can assess and investigate complaints more quickly. Furthermore, the Deputy Counsel reported that changes to OPR’s intake process have reduced the amount of time it takes OPR to initially assess the merit of a complaint from up to 90 days to approximately 7 days. The Deputy Counsel stated that OPR’s goal is to review a complaint within 1 week of receipt, complete an inquiry within 6 months, and complete an investigation within 12 months. We found that in 2013 it took OPR an average of 3 months to complete an inquiry and 12 months to complete an investigation. Figure 2 shows a decrease in the average time to complete an inquiry from 7 months in fiscal year 2008 to 3 months in fiscal year 2013. The Deputy Counsel attributed this decrease to OPR’s new approach of reviewing and assessing all complaints for merit before it approves resources for an investigation.

OPR has taken steps to increase the efficiency of its complaint review process by better focusing its time and resources on those cases where misconduct most likely occurred. In 2011, OPR dedicated one full-time Senior Associate Counsel to determine whether misconduct complaints warrant further review, in part to manage complaints more quickly and efficiently. The Senior Associate Counsel works with three full-time staff to determine the merit of the approximately 1,000 complaints OPR receives each year. The Senior Associate Counsel described the process used to assess a complaint. When a complaint comes in, the staff review and assess it against several criteria, such as whether OPR has jurisdiction or another component is more appropriate to manage the issue, whether the complaint includes enough information for OPR to assess it, or whether the courts are still considering the conduct included in the complaint. Complaints screened out at this stage can include, for example, complaints from private individuals about the performance of judges or local or federal law enforcement officers, or numerous complaints from incarcerated prisoners about their treatment while incarcerated—all of which are generally outside of OPR’s jurisdiction. OPR staff propose an initial decision, and the Senior Associate Counsel or another supervisory Associate Counsel reviews the decision before taking action. OPR informs the complainant of its decision and may refer some complaints to the components with jurisdiction over these issues, as appropriate. The Counsel for OPR and the Deputy Counsel stated that this process has helped to increase the efficiency of the complaint review process because it allows OPR management to focus its time and resources on those cases where misconduct most likely occurred.

The Deputy Counsel explained that the remaining complaints typically include all referrals from judicial decisions or judicial criticism, the Congress, DOJ attorneys, and components, as well as high-profile or significant matters. The Senior Associate Counsel assesses and evaluates all of these complaints to determine whether OPR should accept the complaint for inquiry or investigation. If the Senior Associate Counsel determines that the complaint is outside of OPR’s jurisdiction or does not establish facts that would likely support a misconduct finding, the Senior Associate Counsel notifies the complainant that the matter does not merit further OPR review.
For any remaining complaints from these sources, the Senior Associate Counsel prepares a brief memo describing the complaint and the applicable circumstances and recommending that OPR either reject or accept the complaint. The Deputy Counsel or the Counsel reviews the memo and must approve the recommendation to reject or accept the complaint. OPR notifies the complainant of any rejected complaint and opens an inquiry on any complaint that it accepts for further review. According to OPR’s Annual Report for fiscal year 2013, OPR’s review of complaints screened out approximately 85 percent (693 of 819) of complaints for that fiscal year. OPR’s Annual Report does not provide data on the number of complaints rejected for being outside of OPR’s jurisdiction or for not having sufficient information to support a misconduct finding. However, according to OPR’s Deputy Counsel, OPR’s case management system maintains documentation on each complaint and OPR’s disposition of it. Figure 3 shows how many complaints OPR opened for review and the number it rejected from fiscal years 2008 through 2013.

According to the Senior Associate Counsel, he and his staff make decisions to reject matters using the criteria set forth in the Analytical Framework and err on the side of caution, so as to ensure that they do not inadvertently dismiss any instance of misconduct. The Deputy Counsel said that he is confident in the Senior Associate Counsel’s decisions because the Senior Associate Counsel has years of experience at DOJ and OPR. According to the Deputy Counsel, OPR management meets with the Senior Associate Counsel twice a month to discuss how to manage incoming complaints. The Deputy Counsel explained that this gives management the ability to monitor the Senior Associate Counsel’s activities and decisions and the opportunity to discuss complaints of note. Given the relatively high number of complaints rejected and concerns about the transparency of OPR’s process, we considered what steps OPR takes to help ensure supervisory review of the process for evaluating incoming complaints. We found that OPR’s procedures for deciding whether to reject or elevate a complaint are designed consistent with federal internal control standards, which call for management to review staff activities to ensure that agency goals and objectives are met.

OPR has also implemented office-wide procedures to help ensure the consistency of its professional misconduct investigations. For example, according to OPR’s Deputy Counsel, OPR instructs staff who are investigating complaints of misconduct to develop an investigative plan, which is a roadmap detailing the steps staff will take to resolve the complaint, and to consider prior investigations to see how OPR handled cases similar in nature. The Deputy Counsel also stated that OPR management discusses the investigative plan with OPR attorneys and supervisors prior to converting the matter to an investigation and as the investigation progresses. According to OPR’s Deputy Counsel, attorneys meet with the Counsel for OPR and the Deputy Counsel, as well as their supervisor, to discuss the progress of ongoing investigations. Once OPR completes an investigation, OPR senior management reviews all investigative findings before making a determination on whether an attorney engaged in professional misconduct. The Counsel for OPR approves all findings of professional misconduct before they are closed and referred to either PMRU or the component to determine whether to impose disciplinary action.
In addition, the Deputy Counsel said that to ensure the transparency of its decision making, OPR makes its Analytical Framework, as well as its policies and procedures for handling professional misconduct investigations, available to all DOJ employees and the general public. The Deputy Counsel stated that these documents, available on OPR’s website, outline how OPR reviews and investigates complaints and provide the criteria OPR uses when determining whether an attorney engaged in professional misconduct. In addition, the Deputy Counsel stated that OPR takes steps to notify the public and relevant parties of the results of its findings. For example, OPR sends a letter on its findings to complainants at the conclusion of an inquiry or investigation and provides investigative reports to DOJ management and relevant state bar associations when appropriate to notify them of the misconduct issues found. OPR generally also allows the attorney who is the subject of an investigation to provide a written defense to OPR’s tentative findings of professional misconduct prior to finalizing a report of investigation. Upon written request, OPR also provides its findings to federal judges who have made rulings criticizing the conduct of DOJ attorneys. Furthermore, OPR’s Annual Reports provide statistics on professional misconduct inquiries and investigations as well as summaries of cases in order to give the public more detail on the types of misconduct engaged in by DOJ attorneys.

DOJ faces a number of factors outside of its control when it comes to identifying professional misconduct. For example, according to OPR’s Deputy Counsel, some instances of professional misconduct may go unreported to OPR because attorneys do not deem the behavior significant enough to report. The Deputy Counsel stated that other cases of misconduct may not be referred to OPR because the attorney’s supervisor has concluded that misconduct did not occur. Supervisors have available to them the criteria set forth in the Analytical Framework to determine whether allegations and actions constitute misconduct. Furthermore, while supervisors and attorneys are required to report professional misconduct, failure to do so does not necessarily result in a penalty. According to OPR’s Deputy Counsel, DOJ does not have a set schedule of penalties for attorneys who fail to report professional misconduct, for at least two reasons. First, discipline must be imposed on an individual basis, taking into consideration established factors, and DOJ prefers the flexibility to recommend disciplinary measures on a case-by-case basis rather than being restricted to a set of predetermined penalties. Second, according to OPR’s Deputy Counsel, imposing penalties that are not based upon individual conduct and circumstances may serve to discourage attorneys from reporting misconduct. OPR’s Deputy Counsel stated that OPR, through its training and outreach to employees, continually encourages attorneys to report misconduct.

Third-party stakeholders seeking to strengthen oversight of attorneys who engage in professional misconduct also identified factors that they believe make it difficult for OPR to fully recognize professional misconduct. These factors include, among others, the fear of retaliation for reporting the professional misconduct of colleagues or supervisors and the presumption that an attorney who willingly engaged in misconduct is not going to report his or her actions to OPR.
OPR also has no authority over judges and defense attorneys, and OPR cannot compel them to report misconduct when it occurs. However, according to OPR's fiscal year 2013 Annual Report, OPR receives allegations of misconduct from attorneys and judges. According to OPR's Annual Report, such allegations constituted approximately 42 percent of all investigations opened, and allegations from department attorneys constituted approximately 46 percent of all investigations opened. OPR is taking actions to help it better identify instances of potential professional misconduct that go unreported. For example, OPR routinely conducts searches of available judicial opinions to help detect potential cases of misconduct that judges and other attorneys do not report directly to OPR. Specifically, OPR utilizes Westlaw—an online legal research database for legal and law-related materials and services—to conduct nationwide searches of available judicial opinions that may indicate criticism of government attorneys or professional misconduct that may not have been reported to OPR. OPR assistant attorneys review the results from the Westlaw searches and determine whether to forward the results for further review. These searches at most identify about one case a month where a DOJ employee may have engaged in professional misconduct but did not report it to DOJ. According to OPR's Deputy Counsel, OPR also reviews media sources on a daily basis, such as newspapers, websites, and internal DOJ publications on recent court cases. According to the Deputy Counsel, when OPR identifies a matter that should have been reported but was not, OPR includes in its inquiry or investigation why the matter was not properly reported, and PMRU will take this into account when determining the penalty. In addition, OPR regularly meets with DOJ attorneys to train them on professional responsibility standards, including their responsibility to report misconduct, OPR's complaint resolution process, and the logistics of referring a misconduct case to OPR. Furthermore, OPR tracks the extent to which any attorneys or supervisors have repeatedly engaged in professional misconduct. According to the Deputy Counsel, OPR reviews the role of supervisors in managing attorneys accused of misconduct during its inquiries and investigations. DOJ takes several actions to help prevent instances of professional misconduct among department attorneys, with significant efforts devoted to training. For example, in 2009 DOJ created the National Discovery Coordinator (NDC) position to develop, implement, and administer discovery training for department attorneys, to address concerns about department attorneys failing to meet their discovery obligations, such as in the Ted Stevens trial. Attorneys in the Ted Stevens trial were found to have violated their discovery obligations by failing to disclose statements by prosecution witnesses from trial preparation sessions and by failing to disclose information that contradicted prosecutorial evidence, according to a May 2012 DOJ memo. DOJ also issued the Discovery Blue Book—a comprehensive legal analysis and source of advice on criminal discovery practices—to help attorneys better understand their discovery obligations. According to the Deputy Counsel, DOJ requires that all litigators take 2 hours of professional responsibility training each year. To institutionalize the department's efforts to address discovery obligations, DOJ amended the USAM to formalize the requirements for professional responsibility training.
In addition, DOJ has established the Professional Responsibility Advisory Office to assist attorneys with questions and concerns related to the attorneys' ethical obligations. DOJ does not require components to demonstrate that attorneys found to have engaged in professional misconduct serve the discipline imposed upon them. EOUSA—the component that provides administrative support to USAOs—recently developed a mechanism to require USAOs to demonstrate that discipline for professional misconduct has been implemented, but other DOJ components do not have such a mechanism. We reviewed 40 cases for which OPR made a finding of professional misconduct for attorneys within USAOs and the Criminal Division and that PMRU assessed for disciplinary action, from fiscal years 2011 through 2013 (37 USAO cases and 3 Criminal Division cases). According to our analysis of OPR, EOUSA, and Criminal Division data, 16 of these attorneys (40 percent) resigned or retired either before OPR could complete its investigation or before PMRU could impose discipline. At the time of our request, EOUSA had documentation to support the resignations or retirements for 8 of these 16 attorneys (50 percent) but no longer had documentation for the other 8 because of record retention requirements. PMRU decided and imposed discipline for another 22 of the 40 attorneys (19 USAO attorneys and 3 Criminal Division attorneys). Our review found that 1 of these USAO attorneys did not serve the discipline imposed until our inquiry uncovered this condition. A representative from EOUSA's General Counsel's Office reported that the attorney went undisciplined for 2 years before EOUSA became aware of this situation and took action to ensure that discipline was implemented. PMRU imposed no discipline for the 2 remaining attorneys within USAOs because it found poor judgment in one case, and the other case remains pending. EOUSA had documentation showing that components implemented discipline for 17 of these 19 cases, while the Criminal Division had documentation for 2 of 3. However, EOUSA could not provide documentation of final actions for 2 cases and the Criminal Division could not provide documentation for 1 case. An official from EOUSA's General Counsel's Office reported that, at the time of our finding, EOUSA was in the process of revising its procedures for documenting and implementing discipline to better ensure accountability over disciplinary decisions, in response to a DOJ OIG audit completed in February 2014. This official stated that EOUSA now requires USAOs to provide documentation to EOUSA's General Counsel's Office certifying that the USAO implemented the discipline imposed for any misconduct finding, and that EOUSA maintains all documentation related to professional misconduct cases in its case management system. According to the Associate Deputy Attorney General, other than EOUSA, no other DOJ component has similar procedures or mechanisms in place to ensure that discipline for professional misconduct is implemented. One component reported that it had no need to implement a process because OPR has not found any of its attorneys to have engaged in professional misconduct. While this may be true to date, the component is not prepared to ensure discipline is implemented if OPR does have misconduct findings in the future. Another component reported that when OPR finds that an attorney engaged in misconduct, component management works with the attorney's supervisor to ensure that discipline is implemented.
Nevertheless, this component does not have an internal control in place to be able to demonstrate that the component has implemented the discipline. The Criminal Division reported that it follows up with section management to ensure that they have implemented the discipline PMRU imposed, but does not have specific requirements in place for ensuring that discipline for misconduct is implemented. DOJ OIG reports found that EOUSA was unable to determine whether discipline was implemented or to evaluate disciplinary trends among USAOs because of a lack of documentation. EOUSA has since taken steps to address OIG's concerns, according to an EOUSA official. Although these OIG reports focus more broadly on DOJ's discipline system rather than on professional misconduct, they identify systemic issues related to DOJ's ability to hold employees and attorneys accountable for their actions. Neither DOJ nor component management requires its offices that impose discipline to demonstrate that they actually implemented the discipline, such as by requiring that offices provide components copies of the Standard Form 50 to document personnel action or other documentation that would show the discipline implemented—similar to the mechanism EOUSA recently established in response to OIG's findings. According to OPR's Deputy Counsel, OPR tracks all matters in which OPR found misconduct but discipline has not yet been decided by preparing a quarterly report for senior management. OPR removes cases from its report only once OPR has confirmed with PMRU or the component that discipline has been imposed. However, DOJ does not have a mechanism in place to ensure that component management actually implements discipline once it has been imposed. Federal agencies are required to implement disciplinary systems consistent with federal regulations developed by the Office of Personnel Management (OPM). In addition, DOJ's Associate Deputy Attorney General agreed that requiring components to demonstrate that discipline is implemented is an important step in ensuring that attorneys are disciplined for violations of professional standards, and that DOJ could do more to ensure that discipline for misconduct is implemented agency-wide. By requiring that component management demonstrate that it has implemented discipline, DOJ will have better oversight of disciplinary decisions to ensure they are carried out agency-wide. Disciplinary action is not only punitive, but preventive, as it sends a message to attorneys across DOJ that there are consequences for misconduct. Standards for Internal Control in the Federal Government call for control activities to be established to ensure that management's directives are carried out. An example of such a control activity is a mechanism for ensuring that discipline for professional misconduct is implemented. Requiring components that impose discipline to demonstrate that they actually implemented the discipline, as EOUSA has required, will help provide reasonable assurance that all attorneys are held accountable for professional misconduct. DOJ has plans to help ensure consistent and timely decisions about discipline for attorneys who OPR finds to have engaged in professional misconduct, but has not yet implemented these plans. DOJ did not provide GAO with reasons why it has not yet taken action to implement these changes. DOJ plans to expand using PMRU as the official disciplinary component for department attorneys found to have engaged in misconduct from USAOs and the Criminal Division to all litigating components.
According to a January 14, 2011, memo from the Attorney General, because department employees handle disciplinary matters as only one of many assigned responsibilities, disciplinary procedures at DOJ have resulted in delays in completing the disciplinary process and create the risk of inconsistent application of disciplinary measures for similar offenses. The Attorney General stated that using PMRU—which focuses exclusively on such disciplinary matters—for department attorneys found to have engaged in professional misconduct will help to address these issues. According to the Associate Deputy Attorney General, the department is currently reviewing a memo that would bring all divisions under PMRU's disciplinary process but has no timetable for implementing these changes. Given that the department has not taken action in almost 4 years on the Attorney General's original January 2011 memo calling for the change to PMRU's role, establishing near-term milestones for implementing this change would help to provide the department with some accountability for achieving the Attorney General's goal. Using milestones as a means for management to meet established agency objectives is consistent with project management criteria found in A Guide to the Project Management Body of Knowledge. Setting milestones to ensure the needed changes are implemented will help provide DOJ with reasonable assurance that attorneys who OPR finds to have engaged in professional misconduct are disciplined in both a timely and consistent manner. To address concerns that attorneys found to have engaged in professional misconduct received performance awards or a promotion, we asked EOUSA and the Criminal Division to provide us data on the number of attorneys receiving an award or promotion within 1 year of PMRU's disciplinary decision. We found that DOJ gave 9 attorneys whom PMRU had disciplined a division-level or discretionary award for good performance within 1 year of PMRU's decision. According to DOJ, discretionary awards are of minimal value, and an attorney's division management can approve these awards. Four attorneys within USAOs received a time-off award averaging about 22 hours, while 2 attorneys within USAOs received lump sum cash awards of between $1,100 and $2,000. In addition, 2 attorneys from USAOs received both a lump sum cash and a time-off award. Similarly, 1 Criminal Division attorney received a quality step increase award. None of the attorneys received a department-level award, such as a Senior Executive Service, Attorney General, or Presidential Award, all of which are vetted by the Senior Executive Resources Board, a performance review board that is chaired by the Associate Deputy Attorney General and composed of senior DOJ officials. This performance review board vets nominated attorneys to determine whether they had misconduct or other performance- or behavior-related issues and whether these affected consideration for awards. DOJ officials reported that many of the awards that these attorneys received were based on specific performance during a rating year that did not include the conduct that led to discipline. Accordingly, DOJ also stated that there is nothing inconsistent with receiving a performance award for outstanding or exemplary performance, yet previously being disciplined for misconduct.
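As a consistency check on the award counts reported above, the following minimal Python sketch tallies the division-level and discretionary awards by type; every figure is taken from this section, and the tally simply confirms that the categories account for all 9 attorneys.

```python
# Minimal sketch: tally the performance awards, reported above, that went to
# attorneys PMRU had disciplined within 1 year of PMRU's decision.
# All counts and dollar/hour figures are those cited in this section.

awards = {
    "USAO time-off award (avg. ~22 hours)": 4,
    "USAO lump sum cash award ($1,100-$2,000)": 2,
    "USAO both cash and time-off award": 2,
    "Criminal Division quality step increase": 1,
}

for category, count in awards.items():
    print(f"{count} attorney(s): {category}")

total = sum(awards.values())
print(f"Total attorneys receiving a division-level or discretionary award: {total}")  # 9
```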
According to DOJ’s Deputy Assistant Attorney General for Human Resources and Administration and Chief Human Capital Officer, DOJ does not have an official policy for granting awards or for recognizing the good performance of attorneys that have been accused of, or found to have engaged in, professional misconduct. The Deputy stated that DOJ components have the discretion to award employees with time-off and cash awards as the components see fit. The Deputy stated that these awards are of modest amounts and serve to boost morale and recognize good performance in a timely manner. Supervisors of attorneys accused of, or found to have engaged, in professional misconduct use managerial discretion when determining what work responsibilities they will assign to these attorneys and ensuring that these attorneys are complying with professional standards. According to an official within EOUSA’s General Counsel’s office, DOJ provides for the use of managerial discretion in dealing with personnel issues to allow supervisors the flexibility in managing the workload and staff. In surveying 20 USAOs and 28 litigating sections, respondents to our questionnaire reported that, in addition to using managerial discretion, they use other agency-wide resources and guidance to assist them in making such decisions. For example, DOJ has general guidance for supervisors, outlined in several administrative directives issued by OARM, on disciplinary actions that they can take to ensure that employees are complying with standards of conduct, including guidance for making determinations about work assignments for attorneys under investigation DOJ also provides guidance to supervisors on how to for misconduct.manage attorney departures from professional standards. For example, the USAM provides guidance on standards of conduct for DOJ attorneys and the U.S. Attorneys’ Procedures help supervisors apply DOJ procedural guidance on a variety of issues, including personnel management. In addition, several DOJ internal offices offer support for supervisors when managing professional misconduct, such as the Professional Responsibility Advisory Office, EOUSA’s General Counsel’s Office, and OPR. Over half of our respondents,12 of 20 USAOs and 20 of 28 litigating sections, have had experience in managing attorneys accused of, or found to have engaged in, professional misconduct. These respondents reported that they assign work to attorneys on a case-by-case basis but based on a variety of factors, including the following: The nature of the alleged misconduct. Sixteen USAOs and 19 litigating sections reported that they consider the seriousness of the nature of the possible misconduct when assigning work responsibilities or the circumstances contributing to the allegation or finding of misconduct. Trust. Seven USAOs and 4 litigating sections said that they determine the extent to which the attorney can be entrusted with responsibility for conducting investigations and prosecutions. Nature of available work assignments. Five USAOs and 11 litigating sections reported that before making a determination about what to assign an attorney found to have engaged in professional misconduct, they determine whether the assignment relates to a complaint of misconduct that OPR is investigating. Attorney skill set and previous experience. Ten USAOs and 5 litigating sections reported that they consider the attorney’s prior performance when determining work assignments, including whether the attorney had a history of misconduct. 
One USAO reported that if the offending attorney had a personal difficulty, such as a death in the family that might have contributed to the misconduct, the office might approach the finding as a one-time error and continue to assign the attorney important cases but with closer supervision in light of the OPR finding. According to the Associate Deputy Attorney General, USAOs also consider the availability of resources when making work assignments. The Associate Deputy Attorney General reported that USAOs will often assign attorneys to cases based on risk level and, when determining work assignments, will use their professional judgment to assess whether an attorney accused of, or found to have engaged in, professional misconduct poses a risk to a case. The remaining 8 USAOs and 8 litigating sections reported that they did not have experience in managing attorneys accused of, or found to have engaged in, professional misconduct. Nevertheless, 7 of these USAOs and 4 of these litigating sections provided a variety of hypothetical examples of how they would assign work responsibilities to these attorneys and cited factors they would consider when doing so, which were similar to those discussed above. Finally, 1 of the USAOs and 4 of the litigating sections did not have experience in managing attorneys in these situations and did not provide information on what factors they would consider when making such determinations. In addition to the factors cited above, respondents reported other issues they consider when assigning work responsibilities. For example, they may not assign work responsibilities to attorneys accused of professional misconduct any differently than other attorneys until they consult with EOUSA's General Counsel's Office or component management. Specifically, one litigating section reported that it does not normally take ongoing OPR investigations into consideration when making work assignments unless the investigation casts doubt on the attorney's ongoing capacity to practice law on behalf of the federal government. Respondents reported that they may also not consider allegations of professional misconduct when assigning work responsibilities if they determine that the allegation may lack merit. For example, one litigating section reported that whether a pending OPR review should influence work assignments depends, in part, upon the office's evaluation of the nature, seriousness, and merit of the allegation and the likelihood it may lead to an OPR finding of professional misconduct. Finally, they may not alter work assignments in situations where an attorney did not engage in misconduct but did engage in other types of poor behavior, such as committing negligent conduct or making a mistake. The respondents said that they believe these types of issues are best addressed through training and closer supervision. According to respondents, DOJ provides guidance and training—such as through the USAM and discovery training—to help ensure that attorneys are abiding by professional standards. Furthermore, all 20 of the USAOs and 26 of 28 litigating sections identified a variety of factors they use to help ensure that supervisors are providing adequate oversight of their attorneys who have been accused of, or have been found to have engaged in, misconduct.
These include, among others, requiring that management routinely discuss any performance or conduct issues with staff attorneys and take corrective actions accordingly; that management review written products; that supervisors meet regularly with staff attorneys to review the status of their cases or to routinely assess staff performance; and that supervisors take training on how to provide adequate oversight of attorneys' work responsibilities. Furthermore, 5 respondents reported that they routinely meet with judges to address any concerns that arise regarding an attorney's professional responsibilities and obligations during the course of litigation. For example, one USAO reported that the U.S. Attorney and other senior management attend the quarterly meetings of the U.S. Magistrate Judges in the district. According to this USAO, these meetings provide feedback from the judiciary as to how the office is doing in handling cases before the Magistrate Judges, and a venue to raise any problems or issues, such as concerns about attorney conduct. Under departmental policy, DOJ is not to authorize legal representation for purposes of defending attorneys in proceedings that OPR conducts, because it is generally not in the interests of the United States to provide federal employees with legal representation in internal agency administrative investigations. This policy also precludes legal representation to assist employees in preparing submissions to support their defense in internal disciplinary investigations, or to represent employees in agency disciplinary proceedings, including those that OPR conducts. Federal law provides the Attorney General with the authority "to attend to the interests of the United States." 28 U.S.C. § 517. See also 28 U.S.C. § 516 (providing for the Attorney General's authority to conduct litigation "in which the United States, an agency, or officer thereof is a party, or is interested"). DOJ policy statements concerning individual capacity representation are found at 28 C.F.R. §§ 50.15-50.16. Under these policy statements, DOJ may provide legal representation to an employee when the conduct at issue occurred within the scope of the employee's employment and providing representation is determined to be "in the interest of the United States." According to DOJ, if an employee acted within the scope of employment, DOJ's starting assumption is that it is in the interest of the United States to provide representation. According to DOJ officials, DOJ has long recognized that it serves the government's interest to represent federal employees who may face personal liability, or a lawsuit, as a result of fulfilling their work responsibilities, even where the employee has made a mistake but was acting in good faith attempting to perform federal duties. As a result, in certain instances, DOJ authorizes legal representation for DOJ attorneys who are involved in legal proceedings for certain actions because the attorneys were acting in their capacity as federal employees, even where they are involved in a concurrent OPR or other internal investigation for the same actions. For example, according to DOJ, assuming that an attorney's request for representation meets the criteria set forth in DOJ's policy statement, DOJ could provide representation for a DOJ attorney who is the subject of a state bar proceeding while that attorney is also the subject of an OPR investigation related to the same conduct. However, representation would be limited to the state bar proceeding and would not cover defense in the OPR investigation.
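To make the representation policy described above easier to follow, the sketch below reduces it to simple boolean logic. This is a schematic illustration of the conditions discussed in this section, not DOJ's actual decision procedure, and the function and parameter names are ours.

```python
# Schematic sketch (not DOJ's actual procedure): the representation conditions
# described above, reduced to boolean checks. Names are illustrative only.

def may_authorize_representation(within_scope_of_employment: bool,
                                 in_interest_of_united_states: bool,
                                 proceeding_is_internal_doj: bool) -> bool:
    """Return True if, under the policy as described, DOJ could authorize
    representation: the employee acted within the scope of employment,
    representation is in the interest of the United States, and the proceeding
    is not an internal DOJ administrative or disciplinary matter (such as an
    OPR investigation)."""
    if proceeding_is_internal_doj:
        # Departmental policy: no representation for internal agency
        # investigations or disciplinary proceedings, including OPR's.
        return False
    return within_scope_of_employment and in_interest_of_united_states

# Example from the text: a state bar proceeding may be covered even while a
# related OPR investigation is pending, but the OPR matter itself is not.
print(may_authorize_representation(True, True, proceeding_is_internal_doj=False))  # True  (state bar proceeding)
print(may_authorize_representation(True, True, proceeding_is_internal_doj=True))   # False (OPR investigation)
```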
According to the Director of the Constitutional and Specialized Torts Litigation Section (CSTL) within DOJ's Civil Division—the primary section that authorizes legal representation for federal employees—internal investigations are relatively common in high-profile matters. In addition, the Director said that DOJ does not assume that an internal investigation will find that an employee has committed misconduct and, therefore, will not automatically withhold representation from an attorney who is also under investigation. Under its policies, before authorizing representation in a case where there appears to exist the possibility of an OPR investigation of the same subject matter, CSTL (or the relevant litigating division) contacts OPR and the relevant prosecuting divisions within DOJ to determine whether there is an open OPR investigation relating to the matter for which representation is sought. CSTL (or the relevant litigating divisions) also contacts OPR and the relevant prosecuting components within DOJ to determine whether the employee requesting representation is also the subject of a federal criminal investigation or a defendant in a criminal case. DOJ can authorize either direct representation—through a DOJ attorney—or private counsel representation. Direct representation is the most common form of legal representation, and DOJ provides this as the default. During fiscal years 2008 through 2013, DOJ provided direct representation in more than 5,300 matters. However, where there is a conflict of interest among defendants, among other circumstances, DOJ may pay for private counsel representation to ensure that each defendant receives appropriate representation for his or her specific circumstances. We determined that DOJ expended $3.66 million from fiscal years 2008 through 2013 for private counsel representation for 38 DOJ attorneys, in headquarters or in a USAO, involved in 18 legal proceedings where there were also related OPR investigations. This amount was about 23 percent of the total $16.1 million that DOJ expended for private counsel during this time period for all matters in which at least one DOJ employee was represented. Costs for private counsel representation across the federal government during this time period totaled $25.5 million. In the related OPR investigations, DOJ found 12 attorneys to have engaged in professional misconduct. Situations can arise in a variety of circumstances where DOJ authorizes representation through private counsel at DOJ's expense and there is also a related OPR investigation. Ten of the 18 proceedings we identified that had a related OPR investigation involved allegations of the failure to disclose certain required evidence in the discovery process. For example: In one instance, DOJ authorized private counsel at DOJ's expense for a department prosecutor involved in a state bar proceeding related to allegations that the prosecutor had failed to disclose that a victim stated he did not see who shot him. The bar recommended a 30-day suspension, but the final decision remains pending. Given the bar's involvement, as well as the fact that the attorney left DOJ, OPR closed the matter as an inquiry. In another case, the trial court judge found that a number of prosecutors had, among other things, filed a superseding indictment in bad faith and failed to disclose evidence regarding cooperating witnesses.
The court of appeals rejected the original finding, holding, among other things, that the trial court had violated the constitutional right to due process of the two lead prosecutors by sanctioning them without notice to rebut the charges against them. Upon remand of the case to the trial court, no further disciplinary proceedings were initiated. The departmental investigation of these attorneys ultimately found that 1 had exercised poor judgment. In another instance, DOJ authorized private counsel representation at DOJ's expense for a prosecutor to respond to a court order to show cause why sanctions should not be imposed for the failure to disclose exculpatory evidence. The court concluded that the violation was unintentional and that the prosecutor was unlikely to commit comparable errors in the future, so the court decided not to impose sanctions. A related OPR investigation determined that 1 attorney involved in this case had engaged in professional misconduct in reckless disregard of the attorney's obligations, and PMRU imposed a suspension. The manner in which DOJ attorneys exercise their decision-making authority has far-reaching implications, in terms of justice and effectiveness in law enforcement. Ensuring that federal attorneys are held accountable when they do not meet their professional obligations is important for providing the public with assurance that those contributing to the fair administration of federal laws are not impairing the government's law enforcement efforts and are acting as good government stewards. DOJ has taken actions to help better manage its process for receiving and investigating complaints of professional misconduct, but continues to face a number of factors outside of its control when it comes to identifying misconduct. To help address these factors, DOJ has implemented a variety of training programs for its attorneys and implemented procedures to help detect instances of misconduct that go unreported. However, until DOJ consistently ensures that all attorneys found to have engaged in misconduct are appropriately disciplined, DOJ cannot effectively address violations of professional standards. By requiring that components demonstrate they actually implemented the discipline imposed for misconduct, DOJ can help provide Congress and the public reasonable assurance that professional misconduct does not go unaddressed. Furthermore, by establishing near-term milestones for expanding PMRU's jurisdiction to all department attorneys, DOJ can better ensure that it is addressing violations of professional standards by all department attorneys in a timely and consistent manner. To help provide Congress and the public with reasonable assurance that attorneys found to have engaged in professional misconduct are disciplined, and to prevent delays in implementing this discipline, we recommend that the Attorney General take the following two actions: require components that impose discipline to demonstrate that they actually implemented the discipline—similar to EOUSA's requirement—and establish near-term milestones that will hold the department accountable for completing its goal to expand PMRU's jurisdiction to all department attorneys found by OPR to have engaged in professional misconduct. We provided a draft of this report to DOJ for review and comment. On November 26, DOJ's Audit Liaison Group informed us via email that the department concurred with our recommendations.
In terms of our recommendation on expanding PMRU's jurisdiction, DOJ reported that even though the department has not taken steps to do this, the change is under active consideration. DOJ also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of the report to the Attorney General of the United States and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
1. To what extent does the Department of Justice (DOJ) have processes to manage complaints of professional misconduct, discipline attorneys for findings of misconduct, and advise on performance awards for these attorneys?
2. How do supervisors determine work responsibilities for attorneys accused of, or who have been found to have engaged in, professional misconduct?
3. What are DOJ's policies for paying or reimbursing the attorneys' fees and costs of departmental employees in actions relating to allegations of contempt of court or prosecutorial misconduct, and to what extent is DOJ paying for such costs?
To address our first objective, we reviewed DOJ guidance related to establishing and overseeing attorney standards of conduct, including ethical conduct, such as outlined in the U.S. Attorneys' Manual (USAM) and published regulations. We also reviewed previous GAO and DOJ Inspector General reports on DOJ's processes for managing professional misconduct and disciplining attorneys. We assessed federal and agency-wide policies establishing DOJ's processes for identifying, investigating, and disciplining professional misconduct, including OPR's Analytical Framework, which provides guidance on the types of behavior identified as professional misconduct and on how OPR conducts investigations into misconduct. We compared OPR's process for supervisory review of professional misconduct complaints with internal control standards to assess whether OPR management was providing sufficient oversight of the receipt and review of misconduct complaints. We reviewed DOJ educational resources available to assist attorneys in meeting their professional responsibilities, as well as the professional responsibility training that the department requires of its attorneys. We reviewed OPR complaint data from fiscal years 2008 through 2013 in order to describe what, if any, changes occurred in the number of complaints and the length of time to complete inquiries and investigations since DOJ's process for managing complaints of professional misconduct changed in 2011.
We reviewed internal DOJ personnel and disciplinary documentation for 40 cases that the Office of Professional Responsibility (OPR) investigated for professional misconduct between fiscal years 2011 and 2013 to determine the discipline imposed upon these attorneys, and compared DOJ's practices for documenting disciplinary actions with internal control standards. We reviewed these cases because they were the first and only cases of professional misconduct, at the time of our review, for which the Professional Misconduct Review Unit (PMRU) has jurisdiction to review and assess for disciplinary action. For each case, we reviewed internal DOJ personnel and disciplinary documentation to determine the discipline imposed upon attorneys found to have engaged in professional misconduct and the extent to which DOJ implemented disciplinary decisions when attorneys were found to have engaged in professional misconduct. We did not test whether the discipline imposed was consistent across offenses because the type and length of discipline depend upon DOJ's professional judgment. For these attorneys we also determined whether DOJ had provided a performance award or promotion to them within 1 year of PMRU's disciplinary decision. To identify what constituted a performance award, we used criteria provided by the Office of Personnel Management (OPM), which allows agencies to provide one of four types of awards to federal employees: lump-sum cash awards, honorary awards, informal recognition awards, and time-off awards. We also reviewed DOJ-provided data to determine whether these attorneys had received any DOJ-specific awards. In addition, we interviewed the Deputy Assistant Attorney General for Human Resources and Administration to determine how DOJ decides which employees are eligible to receive an award. We also interviewed senior-level DOJ officials within OPR, PMRU, the Criminal Division, the Executive Office for U.S. Attorneys (EOUSA), and the Professional Responsibility Advisory Office, as well as the National Discovery Coordinator, to obtain their views on how DOJ manages complaints of professional misconduct and to identify actions DOJ has taken to help deter departures from professional standards. We interviewed a variety of third-party stakeholders, such as advocacy groups and academics, and reviewed the literature they produced, to obtain information on their perspectives on DOJ's efforts to address professional misconduct. The views obtained from these stakeholders cannot be generalized. However, they provide valuable insights about DOJ's ability to effectively identify and address professional misconduct within the department. To address our second objective, we sent a questionnaire to 20 selected U.S. Attorneys' Offices (USAO) and 28 litigating sections within selected DOJ components to collect information on the various types of policies and procedures put in place to manage the work activities of attorneys accused of or found to have engaged in professional misconduct. We selected USAOs because officials from EOUSA's General Counsel's Office stated that attorneys within the USAOs would be in the best position to discuss management of the work activities of attorneys alleged or found to have committed professional misconduct. We selected litigating sections to provide additional examples of how the department manages the work activities of attorneys alleged or found to have committed professional misconduct.
To ensure that we obtained information across USAOs with varying workloads, we ordered DOJ's 93 USAOs by size using office case workload hours provided in the U.S. Attorneys' 2012 Statistical Report, divided these into quartiles, and randomly selected 5 USAOs within each quartile. We selected litigating sections within DOJ components that had experience managing attorneys subject to a complaint of professional misconduct between fiscal years 2008 and 2013. Using these criteria, we sent questionnaires to the Criminal Division, Civil Rights Division, Tax Division, Environment and Natural Resources Division, Antitrust Division, and the Civil Division. Because we used a nongeneralizable sample, our findings cannot be used to make inferences about other USAOs or DOJ components. We received responses from all 20 USAOs and 28 litigating offices within DOJ components. We did not independently verify the data reported by offices in the questionnaire; however, we interviewed senior-level officials with EOUSA's General Counsel's Office to assess the reasonableness of the data reported. We believe the data are reliable for our purposes. We also interviewed officials within EOUSA to identify what challenges may arise when managing attorneys who are accused of, or who have been found to have engaged in, professional misconduct, and to determine how they manage attorneys' work assignments. To address our third objective, we analyzed DOJ's policies for providing legal representation to federal employees as outlined in 28 C.F.R. § 50.15 and 28 C.F.R. § 50.16. We assessed agency-wide policy guidelines identifying the circumstances under which federal employees are eligible to receive representation by private counsel at DOJ expense. We collected data from the Constitutional and Specialized Torts Litigation Section (CSTL) of the Civil Division—the primary section that authorizes legal representation for federal employees and that maintains data on these requests—and from the Civil Division's Office of Planning and Budget Evaluation on the number of cases for which DOJ approved legal representation for federal employees between fiscal years 2008 and 2013. We also collected cost data from DOJ on the total amount DOJ paid to provide legal representation by private counsel to federal employees between fiscal years 2008 and 2013. We did not collect cost data from DOJ on the amount it expended to provide direct representation because of the time and difficulty required of DOJ to collect these data. We did, however, collect cost data from DOJ on the amount expended for private counsel because DOJ keeps receipts for expenditures made to private counsel firms. We assessed the reliability of both sets of data by interviewing staff within CSTL and the Civil Division's Office of Planning and Budget Evaluation. We concluded that these data were sufficiently reliable for the purposes of this report. To determine the amount paid by DOJ for private counsel representation for its attorneys where there was a related OPR investigation, we asked CSTL to provide data on matters where private counsel representation at DOJ expense was provided to attorneys in a main justice component or USAO for fiscal years 2008 through 2013. OPR also provided information for each of these matters as to whether there was a related OPR inquiry or investigation; OPR may not have investigated all persons to whom representation was granted.
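As a minimal sketch of how the private counsel cost data collected here relate to the share reported in the body of this report, the Python calculation below uses only the dollar totals cited there; the government-wide comparison in the last line is a derived figure.

```python
# Minimal sketch: derive the share of DOJ's private counsel spending that went
# to matters with a related OPR investigation, fiscal years 2008-2013.
# Dollar totals are the figures reported in the body of this report.

private_counsel_with_opr_matter = 3.66e6   # 38 DOJ attorneys, 18 proceedings
private_counsel_all_doj_matters = 16.1e6   # all matters with at least one DOJ employee represented
private_counsel_governmentwide = 25.5e6    # all federal agencies

share_of_doj_total = private_counsel_with_opr_matter / private_counsel_all_doj_matters
print(f"Share of DOJ private counsel costs tied to OPR-related matters: {share_of_doj_total:.0%}")  # ~23%
print(f"DOJ share of government-wide private counsel costs: "
      f"{private_counsel_all_doj_matters / private_counsel_governmentwide:.0%}")  # ~63% (derived)
```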
To determine the nature of the matters—including whether they involved the failure to disclose certain required evidence in the discovery process—we reviewed bar disciplinary decisions, docket sheets, judicial opinions, and other publicly available documents describing allegations of professional misconduct related to these matters. We conducted this performance audit from September 2013 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Dawn Locke (Assistant Director); Sara Margraf (Analyst-in-Charge); Wendy Dye; Lorraine Ettaro; Eric Hauswirth; Jonathan Hutto; Tracey King; Linda Miller; Jessica Orr; Tovah Rom; Joseph Som-Pimpong; and Janet Temko-Blinder made key contributions to this report.
Instances of professional misconduct—such as a violation of an attorney's responsibility to be honest—among DOJ attorneys have called into question DOJ's efforts to oversee attorney behavior, including its processes for investigating and disciplining misconduct complaints. Congress mandated that GAO review DOJ's performance in disciplining attorneys. This report addresses (1) DOJ's processes to manage misconduct complaints; (2) how supervisors determine work responsibilities for attorneys accused of, or found to have engaged in, misconduct; and (3) DOJ's policies for paying for representation for attorneys investigated for misconduct. GAO reviewed DOJ regulatory obligations and policies, and legal representation costs from fiscal years 2008 through 2013. GAO also analyzed survey responses on assigning work responsibilities from 48 selected USAOs and litigating sections. Responses are not generalizable, but provided helpful insights. GAO also interviewed DOJ officials who manage misconduct complaints. The Department of Justice (DOJ) has made changes to improve its processes for managing complaints of attorney professional misconduct since 2011 but has not implemented plans to improve processes for demonstrating that discipline is implemented or for achieving timely and consistent discipline decisions. For example, GAO found that changes to the Office of Professional Responsibility's (OPR) processes for assessing the merits of misconduct complaints reduced assessment time from up to 90 days in 2008 to about 7 days in 2014. However, GAO found that DOJ does not require its components to demonstrate that attorneys have served the discipline imposed on them for misconduct. Ensuring that discipline is implemented helps hold attorneys accountable for violating professional standards and provides the public reasonable assurance that misconduct is being addressed. DOJ also has not implemented a change called for in a January 2011 memorandum from the Attorney General that would expand the purview of the Professional Misconduct Review Unit (PMRU)--the unit that proposes and decides discipline for attorneys with findings of misconduct by OPR. With this change, PMRU would go from deciding discipline for attorneys with professional misconduct findings in U.S. Attorneys' Offices (USAO) and the Criminal Division to deciding discipline for attorneys in all components. According to the Attorney General, this change could help reduce delays in implementing discipline and ensure consistent decisions about discipline. DOJ did not provide GAO with reasons for not making this change. DOJ policy provides that supervisors of attorneys accused of, or found to have engaged in, professional misconduct can use discretion to determine what work to assign to these attorneys. DOJ also provides agency-wide guidance to supervisors, such as administrative directives and the U.S. Attorneys' Manual, that identify steps supervisors may take when dealing with attorneys accused of misconduct. Representatives of 12 of the 20 USAOs and 20 of the 28 litigating sections we surveyed reported that supervisors assign work on a case-by-case basis but consider factors, such as the nature of the alleged misconduct, in doing so. A smaller number of respondents reported that supervisors may assign work to such attorneys no differently than to other attorneys until the supervisors determine allegations have merit or professional misconduct is confirmed.
Under departmental policy, DOJ is not to authorize legal representation for attorneys in OPR proceedings, including representation to assist such attorneys in preparing submissions to support their defense. However, DOJ attorneys, like all federal employees, may be provided legal representation by DOJ for carrying out their duties, under certain circumstances. For example, DOJ may provide representation for an attorney whose conduct is the subject of a state bar proceeding while the attorney is also the subject of an OPR investigation related to the same conduct. The representation would cover defense in the state bar proceeding but not in the OPR investigation. As a result, from fiscal years 2008 through 2013, DOJ expended $3.66 million for private counsel representation for 38 DOJ attorneys involved in 18 legal proceedings where there were also related OPR investigations. In these related investigations, DOJ found 12 attorneys to have engaged in professional misconduct. GAO recommends that DOJ (1) require components to demonstrate that they have implemented discipline for misconduct and (2) establish near-term milestones for expanding PMRU's jurisdiction to decide discipline for all attorneys with findings of misconduct. DOJ agreed with GAO's recommendations.
HUD provides mortgage insurance on more than 13,000 privately owned multifamily properties under various programs designed to help low- and moderate-income households obtain affordable rental housing. In recent years, HUD had experienced a significant growth in the number of defaulted multifamily mortgages because of financial, operating, or other problems. As of July 1993, HUD held more than 2,400 mortgages with unpaid principal balances totaling about $7.5 billion, more than 2,000 of which were assigned to HUD as a result of default. HUD’s Federal Housing Administration (FHA) insures mortgage lenders against financial losses in the event owners default on their mortgages. When a default occurs, a lender may assign the mortgage to HUD and receive an insurance claim payment from the agency. HUD then becomes the new lender for the mortgage. HUD’s policy is to attempt to restore the financial soundness of the mortgage through a workout plan. If a workout plan is not feasible, HUD may, as a last resort, initiate foreclosure in order to sell the property and recover all or part of the debt. If HUD is unsuccessful in selling a property at a foreclosure sale, it may acquire ownership of the property. HUD retains these properties in its “HUD-owned inventory” until it can sell or otherwise dispose of them. The Housing and Community Development Amendments of 1978 (12 U.S.C. 1701z-11), as amended, required that in disposing of properties, HUD preserve a certain number of units as affordable housing for low-income households. To accomplish this requirement and to ensure that units remain affordable to eligible households, HUD normally uses a federal rental subsidy program called section 8 project-based assistance. Under this program, households do not have to pay more than 30 percent of their adjusted income for rent. Through contracts with HUD, owners are then reimbursed the difference between a unit’s rent and the portion paid by the renter. HUD’s ability to sell a large number of foreclosed properties while preserving affordable units for low-income households was significantly impeded by a shortage of federal funds needed to support section 8 project-based contracts. As a result, in some cases, HUD assumed ownership of the properties rather than sell them to other purchasers at foreclosure sales. HUD then operated these properties until funding for section 8 was available. To facilitate the sale of some properties, in 1991 HUD started using alternatives to providing section 8 project-based assistance that were allowed by the property disposition legislation. These alternatives included getting the purchaser to agree to keep the required number of units available and affordable to lower-income persons for 15 years and to charge occupant households no more than 30 percent of their income for rent. Under this procedure, HUD required new owners, as well as any subsequent owners, to set aside the same number of units that they would have been required to allocate for the section 8 program. Purchasers agreed to fill these rent-restricted units with tenants meeting the same household income eligibility criteria as used in the section 8 program. Use of the rent-restriction approach was limited to properties that, at the time HUD paid off the mortgage lender, were not receiving any HUD subsidy (such as a below market interest rate loan) or were receiving rental assistance payments for fewer than 50 percent of their units. 
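To illustrate the section 8 project-based assistance calculation described above, the following minimal Python sketch uses hypothetical figures; the monthly rent and household income are ours, and only the 30-percent-of-adjusted-income rule comes from the program description.

```python
# Worked example (hypothetical figures): the section 8 project-based assistance
# calculation described above. A household pays no more than 30 percent of its
# adjusted income toward rent; HUD reimburses the owner for the difference.

unit_contract_rent = 700.0          # hypothetical monthly rent for the unit
annual_adjusted_income = 15_000.0   # hypothetical household adjusted income

tenant_payment = min(unit_contract_rent, 0.30 * annual_adjusted_income / 12)
hud_subsidy_to_owner = unit_contract_rent - tenant_payment

print(f"Tenant pays:        ${tenant_payment:,.2f} per month")        # $375.00
print(f"HUD pays the owner: ${hud_subsidy_to_owner:,.2f} per month")  # $325.00
```

Under the rent-restriction alternative, as discussed in the following paragraphs, there is no project-based subsidy, so the owner forgoes this difference unless the household can pay the full rent or holds its own section 8 voucher or certificate.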
HUD generally assumes that because occupants will pay no more than 30 percent of their adjusted household income toward the rent, the owner's rental income would be reduced on the rent-restricted units. Accordingly, HUD adjusts the minimum bid prices it is willing to accept on the properties downward to the point that the properties should have a positive cash flow even if the owner received no rental income on the rent-restricted units. Because rent-restricted units can reduce a property's cash flow, HUD has found that the rent-restriction procedure is usually financially feasible only when a relatively small proportion of a property's total units (usually no more than 10 percent) have rent restrictions. Through December 1994, HUD had used the rent-restriction alternative in the sale of 62 properties, or about 17 percent of the properties sold. The 62 properties contained 10,595 units, of which 1,344 were rent-restricted units. HUD's instructions for disposing of multifamily properties did not provide HUD field offices or purchasers of HUD properties with clear directions for implementing the rent-restriction alternative. Field offices therefore made different judgments as to what requirements should apply—particularly whether or not properties should be subject to certain rules and practices that had been used in connection with the section 8 project-based rental assistance program. Consequently, field offices incorporated different, sometimes conflicting, requirements into sale documents and accompanying deed restrictions. HUD first issued instructions for implementing the rent-restriction approach as part of a July 1991 notice prescribing procedures that field offices were to use in selling defaulted mortgages at foreclosure sales. (These instructions did not apply to sales of HUD-owned properties.) The notice described the conditions under which rent restrictions could be used, the length of time the restrictions were to remain in effect at each property, and the limitations on tenants' rents. The notice also included two slightly different standard use agreements that HUD used in writing sales contracts for properties sold at foreclosure. One agreement was to be included in sales contracts when HUD was also requiring that the purchaser perform repairs to a property after the sale; the other was to be used when HUD was not requiring the purchaser to perform post-sale repairs. Both agreements required purchasers to maintain a specified number of units as affordable housing for 15 years and to limit what households pay toward rent to no more than what they would be charged under the section 8 project-based rent subsidy program. Both agreements also required purchasers to follow certain procedures that were required under the section 8 project-based program. First, purchasers had to maintain waiting lists of eligible applicants and fill vacant restricted units on a first-come, first-served basis but give preference to applicants who were involuntarily displaced, living in substandard housing, or paying more than 50 percent of their household income for rent.
Also, both agreements required purchasers to annually verify the income of households occupying restricted units using procedures similar to those used in the section 8 project-based program. While neither of these procedures was specifically required by the property disposition legislation, HUD field office officials believed that they were appropriate because they help ensure that proper controls are used in the management of rent-restricted properties. Moreover, several officials believed that the procedure for filling vacancies is beneficial because it can place more of the cost of providing affordable housing on property owners since it essentially requires the owners to accept low-income households on a first-come, first-served basis even if they would not pay the full rental cost. The primary difference between the two agreements was that the agreement for properties without post-sale repair requirements stated that rent-restricted units could not be occupied by households that continued to possess a section 8 voucher or certificate after occupancy. Several of the field office officials we talked with said that this requirement was appropriate because they believed that rent-restricted units were intended to serve unassisted households. In September 1992, HUD issued instructions for the sale of HUD-owned properties. These instructions, however, differed from the 1991 instructions in that the use agreements only required that purchasers restrict rents on the specified number of units for 15 years and limit rents paid by the occupants to what would be charged under the section 8 project-based program. The use agreements did not require waiting lists or annual income verification procedures and did not prohibit occupancy by section 8 voucher or certificate holders. Thus the agreements gave purchasers greater latitude in filling vacancies—essentially allowing them to exclude a household from their rent-restricted units if the renter could not pay the full rent, either directly or through a rent subsidy assigned to the household. In June 1993, HUD replaced the 1991 and 1992 instructions with instructions that applied both to properties sold at foreclosure and to HUD-owned properties. The use agreements included in the 1993 instructions were essentially the same as the 1992 use agreements with respect to requirements for rent-restricted units. The 1993 instructions thus eliminated any specific requirements for (1) filling vacancies from waiting lists on a first-come, first-served basis; (2) verifying household incomes; and (3) prohibiting section 8 voucher and certificate holders from occupying rent-restricted units. Property disposition officials told us that these changes were made to reduce government regulation and to delegate more authority to field offices. In September 1993, HUD's Office of General Counsel (OGC) specifically directed field offices to discontinue use of the 1991 use agreement that prohibited section 8 voucher or certificate holders from occupying rent-restricted units. Although field offices had approved sales contracts containing the 1991 use agreement, the OGC subsequently concluded that excluding voucher and certificate holders violated section 204 of the Housing and Community Development Amendments of 1978. (Section 204 prohibits property owners from unreasonably refusing to lease units to anyone simply because he or she held a section 8 voucher or certificate.)
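To summarize the differences among the use agreement versions described above, the short Python sketch below encodes them as a small comparison table; the entries restate only what the preceding paragraphs say about the 1991, 1992, and 1993 agreements (all versions also required the 15-year affordability period and the section 8 rent limit).

```python
# Summary sketch of the use agreement requirements described in the preceding
# paragraphs. All versions also required 15 years of affordability and the
# section 8 rent limit; the columns below capture the provisions that differed.

requirements = {
    # version: (waiting list required, annual income verification, voucher/certificate holders excluded)
    "1991, post-sale repairs required":    (True,  True,  False),
    "1991, no post-sale repairs required": (True,  True,  True),   # exclusion later found to violate section 204
    "1992, HUD-owned property sales":      (False, False, False),
    "1993, all sales":                     (False, False, False),
}

print(f"{'Use agreement version':38} {'Waiting list':>14} {'Income verif.':>14} {'Excludes vouchers':>18}")
for version, (waiting_list, income_verification, excludes_vouchers) in requirements.items():
    print(f"{version:38} {str(waiting_list):>14} {str(income_verification):>14} {str(excludes_vouchers):>18}")
```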
HUD headquarters officials told us in November 1994 that the difference in use agreements for rent-restricted units since 1991 occurred unintentionally. The officials said that because of the relatively few properties sold with rent restrictions, they considered the instructions to be a low priority and thus had given them little attention. The officials said that there is no reason why requirements for rent-restricted units should differ because of the type of sale or because post-sale repairs are required. The officials also told us that after discussing the lack of guidance with us in June 1994, HUD issued interim instructions to field offices in July 1994, advising them to direct owners to use waiting lists, annually certify household incomes, and not exclude section 8 voucher and certificate holders. Also, according to the officials, HUD will incorporate these specific requirements into new use agreements that the agency will develop to reflect the rent-restriction provisions of the Multifamily Housing Property Disposition Reform Act of 1994. The officials said that the revised use agreements should be completed after the regulations implementing the 1994 act are finalized. In its comments on our draft report, HUD said that new use agreement riders would be ready for field offices’ use in sales contracts by April 1, 1995. HUD’s inconsistent guidance has led to different requirements being used for owners of properties with rent-restricted units. In a review of 32 properties sold with rent restrictions from February 1993 through June 30, 1994, we found an equal split between properties with the more specific use agreements issued in 1991 and properties with the more general use agreements issued in 1992 and 1993. In six instances, however, the responsible field office had used the more specific 1991 use agreements during 1994, well after they had been replaced by the more general agreements in June 1993. We also found that several field offices were continuing to actively discourage purchasers from counting certificate and voucher holders toward satisfying rent-restriction requirements even after the OGC, in September 1993, advised them of section 204 and its applicability. HUD property disposition officials told us that they intended to give field offices flexibility to modify the 1993 use agreements on the basis of local preferences, but that field offices should not be discouraging voucher and certificate holders from occupying rent-restricted units. The three properties we visited illustrate how HUD’s waiting list requirements can influence the extent to which a property owner actually experiences reduced rental income because of rent-restricted units. Two of these properties were formerly HUD-owned and, therefore, were sold under the more general use agreements, without requirements for filling vacancies from waiting lists on a first-come, first-served basis. The third property was sold with a 1991 use agreement that specifically required use of a waiting list. On-site managers at the two properties sold with the 1992 use agreement told us that they did not accept tenants in rent-restricted units unless the households also had a section 8 voucher or certificate or unless 30 percent of their adjusted income (i.e., what the tenant would have to pay) equalled the full rent. Households that did not have certificates or vouchers or that did not have the necessary income to pay the full rent were turned away. 
In contrast, the third property was using a waiting list to fill unoccupied units. This particular 280-unit property had 55 rent-restricted units. Because the waiting list provided a systematic selection process, applicants were selected on a first-come, first-served basis. None of the 55 households residing in the rent-restricted units had vouchers or certificates or sufficiently high incomes; therefore, the owner was receiving less than the full rent on each of the units. According to data provided by the on-site management company, the property was receiving an average of $357 less than the full monthly rental income for each of the rent-restricted units. Until recently, HUD headquarters’ and field offices’ actions to oversee compliance with the rent-restriction agreements were limited. However, in July 1994, HUD directed its field offices to review compliance at a number of selected properties. The field offices found that 2 of the 16 properties they reviewed had not fully complied with their rent-restriction agreements. The property owners disagreed, and HUD was reviewing the cases as of November 1994. HUD did not issue instructions to its field offices for monitoring compliance with rent-restriction agreements until we discussed the matter with its property disposition officials in June 1994. The officials told us that they had not required field offices to monitor purchasers’ compliance with rent-restriction agreements because they considered this to be a low priority, given the relatively small number of properties that had been sold with rent restrictions. However, the officials agreed that some form of oversight was needed. HUD issued a memorandum in July 1994 that required field offices to perform a one-time on-site compliance review at each property having more than 20 rent-restricted units. The agency also provided general guidelines for monitoring compliance and a checklist to use during the review. The memorandum also stated that HUD was considering various alternatives and would later provide instructions for the long-term monitoring of projects to ensure that they remain in compliance with the terms and conditions of the use agreements under which they were sold. According to HUD officials, these instructions were to be prepared after the field offices completed the initial compliance reviews. Field offices were directed to complete their compliance reviews by August 15, 1994. However, because the July 1994 memorandum did not require the field offices to formally report the results of the reviews to HUD headquarters, a second memorandum was issued in September 1994 that extended the time for completing and reporting on the reviews until October 1994. The results of the compliance reviews were reported to HUD headquarters in October 1994. In all, 25 properties containing a total of 949 rent-restricted units met the criteria to be reviewed (i.e., they contained 20 or more rent-restricted units). However, reviews at 9 of the 25 properties were postponed for several months because the properties had only been recently sold and had not yet had time to fully implement their rent-restriction procedures. The field offices determined that 14 of the remaining 16 properties complied with the provisions of their use agreements and that 2 properties were not in compliance. As of November 1994, HUD was reviewing these two cases to determine what actions, if any, should be taken. HUD property disposition officials said that they were satisfied with the overall compliance found to date. 
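To make the rental-income arithmetic concrete, the sketch below works through the 30-percent-of-adjusted-income rule and the resulting shortfall that an owner absorbs on a rent-restricted unit, using the figures reported for the 280-unit property described above. The full market rent and the tenant income in the sketch are hypothetical; only the 30 percent rule, the 55 restricted units, and the $357 average monthly shortfall come from this report.

```python
# Illustrative sketch, not HUD's actual underwriting worksheet, of the
# rent-restriction arithmetic described above. The 30-percent rule, the 55
# restricted units, and the $357 average monthly shortfall come from this
# report; the full market rent and tenant income are hypothetical.

def monthly_tenant_payment(adjusted_annual_income):
    """Monthly payment under the 30-percent-of-adjusted-income rule."""
    return 0.30 * adjusted_annual_income / 12

# Hypothetical unit: $650 full market rent, occupant with $11,720 in
# adjusted annual income and no section 8 voucher or certificate.
full_rent = 650.00
payment = monthly_tenant_payment(11_720)      # about $293 per month
unit_shortfall = full_rent - payment          # about $357 absorbed by the owner

# Reported figures for the 280-unit property: 55 restricted units averaging
# $357 per unit per month in forgone rent.
monthly_loss = 55 * 357                       # $19,635 per month
annual_loss = monthly_loss * 12               # $235,620 per year

print(f"Hypothetical unit: tenant pays ${payment:,.0f}; owner forgoes ${unit_shortfall:,.0f}")
print(f"Reported property: ${monthly_loss:,} per month, ${annual_loss:,} per year in forgone rent")
```

Forgone rent on this scale is the reason HUD discounts the minimum bid prices it will accept on properties sold with rent restrictions.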
HUD property disposition officials told us that the agency had planned to develop instructions to field offices for the long-term monitoring of owners’ compliance with rent-restriction agreements, but as of December 1994, they did not have a specific target date for issuing them. In commenting on our draft report, HUD said that it would issue revised monitoring procedures to its field offices by May 1, 1995. In April 1994, the Congress enacted the Multifamily Housing Property Disposition Reform Act (P.L. 103-233), which revised the procedures HUD may use to dispose of multifamily properties. Although rent-restriction agreements are likely to continue as an important aspect of HUD’s multifamily property disposition activities, future use of the current rent-restriction alternative is likely to decrease. The act authorizes HUD to use rent restrictions as a means of complying with a number of its requirements (such as ensuring that units in certain properties that do not receive project-based section 8 assistance remain available and affordable to low-income families). The act gives HUD broad discretionary authority to use rent restrictions and to discount sales prices in order to meet the act’s property disposition goals. The act also established an additional way to determine the maximum amount that occupants of rent-restricted units have to pay toward rent. Occupants can be required to pay a percentage of the median income in the local area, instead of a percentage of their household income. This could increase the amount that some households with low incomes pay toward rent. HUD officials told us that while the previously used rent-restriction agreements may still be used under the 1994 act, they believe that the need to use them in future sales may be limited. Instead, HUD is likely to use rent-restriction agreements that base tenants’ rent payments on a percentage of the area’s median income. The officials also noted that the need for the previous agreements will be diminished at least through fiscal year 1995 because larger amounts of section 8 funding have been appropriated (approximately $550 million in fiscal year 1995 compared with $93 million in fiscal year 1993). As proposed in our draft report, HUD recently established a firm schedule for prompt issuance of instructions implementing the new rent-restriction options that it plans to use in carrying out the 1994 legislation. In its comments on our draft report, HUD said that new use agreement riders reflecting the 1994 legislation would be available for use in sales contracts by April 1, 1995, and that its revised monitoring instructions, scheduled for issuance by May 1, 1995, would include revisions to reflect the 1994 legislation. HUD has not (1) provided its field offices nor purchasers of HUD multifamily properties with clear instructions on the procedures owners must follow in managing properties subject to rent restrictions or (2) established long-term requirements specifying how field offices should oversee owners’ compliance with agreed-upon use restrictions. As a result, HUD has placed inconsistent requirements on property owners and, until recently, had not required field offices to oversee owners’ compliance. HUD has acknowledged that it did not provide field offices and property owners adequate instructions when the rent-restriction approach was implemented. 
Although HUD had planned to clarify property management requirements and issue instructions to field offices for the long-term monitoring of properties with rent-restriction agreements, it did not have a definite time frame for completing these actions. However, in response to our draft report, HUD said that it would have revised use agreement riders, which detail purchasers’ obligations for meeting rent-restriction requirements, ready for field offices to use in sales contracts by April 1, 1995. HUD also said that it would issue revised monitoring procedures to its field offices by May 1, 1995. According to HUD officials, the agency will require owners to maintain waiting lists and to fill vacancies from the lists on a first-come, first-served basis. This requirement should increase the availability of future rent-restricted units to households that are not already receiving federal rent assistance by preventing owners from purposely filling vacancies exclusively with holders of section 8 vouchers and certificates. While it is unclear to what extent the previously used rent-restriction agreements will be used in the future, rent restrictions will be a key tool for HUD to use in meeting the requirements of new property disposition legislation enacted in April 1994. HUD plans to soon have available new use agreement riders and monitoring instructions that reflect the additional rent-restriction options it will use in implementing the 1994 act. As was the case with previous rent restrictions, the effectiveness of future restrictions will depend, in part, on how effectively the new riders communicate the procedures owners must follow in managing rent-restricted properties and on the adequacy of the new monitoring instructions. In its comments, HUD said that we correctly pointed out the problems it had experienced in developing procedures to implement the rent-restriction approach but noted that there have been relatively few properties and units sold with rent restrictions. Through its comments, HUD implemented the recommendations that we proposed by establishing a firm schedule for (1) clarifying procedures that owners must follow in managing rent-restricted units, (2) clarifying procedures field offices are to use in monitoring owners’ compliance, and (3) establishing similar procedures for new rent-restriction options that the agency will use to carry out requirements of the Multifamily Housing Property Disposition Reform Act of 1994. Accordingly, this report makes no recommendations, and it has been revised to reflect HUD’s additional actions. We plan to monitor HUD’s issuance of the revised procedures and ensure that the revisions adequately address the problems that we found. (See app. I for the complete text of HUD’s comments.) To evaluate HUD’s instructions and compliance monitoring, we reviewed applicable laws, regulations, and procedures concerning the rent-restriction approach and analyzed information and data provided by HUD on properties sold with rent restrictions through December 31, 1994. We discussed the implementation of the rent-restriction approach with officials from the Office of Preservation and Disposition and the Office of General Counsel at HUD headquarters in Washington, D.C., and with corresponding officials at field offices in Denver, Colorado; Jacksonville, Florida; Atlanta, Georgia; Kansas City, Kansas; St. Louis, Missouri; Greensboro, North Carolina; and Fort Worth and Houston, Texas. 
Through June 30, 1994, these eight field offices were responsible for selling about 60 percent of the rent-restricted properties. We also visited three properties that were sold with rent restrictions, obtained and analyzed information on their rent-restriction procedures, and interviewed property owners and on-site staff. To determine the expected future use of rent restrictions, we (1) reviewed the provisions of the Multifamily Housing Property Disposition Reform Act of 1994, (2) determined what changes the act makes in HUD’s authority for establishing rent restrictions, and (3) discussed with property disposition officials HUD’s plans for implementing the act. We conducted our review from May through December 1994 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Housing and Urban Development. We will also make copies available to others on request. Please contact me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report are listed in appendix II. John T. McGrail
Pursuant to a congressional request, GAO provided information on the Department of Housing and Urban Development's (HUD) procedures for implementing a rent-restriction alternative for the disposition of multifamily properties, focusing on: (1) HUD instructions to its field offices and property purchasers on implementing the alternative; (2) HUD instructions to field offices on monitoring purchasers' compliance with rent-restriction agreements; and (3) the expected future use of the rent-restriction alternative. GAO found that: (1) HUD has not provided adequate instructions on how the rent-restriction alternative should be implemented; (2) HUD has inconsistently enforced its policy of filling vacant units on a first-come, first-served basis, which ensures that new property owners accept low-income households regardless of how much rental income the owners receive; (3) HUD did not require its field offices to monitor property owners' compliance with rent-restriction agreements until July 1994, since it placed a low priority on establishing monitoring requirements because few properties were sold with rent restrictions; (4) HUD plans to issue instructions clarifying program requirements by May 1, 1995; (5) changes authorized by new property disposition legislation are likely to diminish HUD use of the current rent-restriction alternative; and (6) many occupants of rent-restricted units may be required to pay rents computed as a percentage of the area median income rather than as 30 percent of their own adjusted household income.
Created in 1961, Peace Corps is mandated by statute to help meet developing countries’ needs for trained manpower while promoting mutual understanding between Americans and other peoples. Volunteers commit to 2-year assignments in host communities, where they work on projects such as teaching English, strengthening farmer cooperatives, or building sanitation systems. By developing relationships with members of the communities in which they live and work, volunteers contribute to greater intercultural understanding between Americans and host country nationals. Volunteers are expected to maintain a standard of living similar to that of their host community colleagues and co-workers. They are provided with stipends that are based on local living costs and with housing similar to that of their hosts. Volunteers are not supplied with vehicles. Although the Peace Corps accepts older volunteers and has made a conscious effort to recruit minorities, the current volunteer population has a median age of 25 years and is 85 percent white. More than 60 percent of the volunteers are women. Volunteer health, safety, and security is Peace Corps’ highest priority, according to the agency. To address this commitment, the agency has adopted policies for monitoring and disseminating information on the security environments in which the agency operates, training volunteers, developing safe and secure volunteer housing and work sites, monitoring volunteers, and planning for emergencies such as evacuations. Headquarters is responsible for providing guidance, supervision, and oversight to ensure that agency policies are implemented effectively. Peace Corps relies heavily on country directors—the heads of agency posts in foreign capitals—to develop and implement practices that are appropriate for specific countries. Country directors, in turn, rely on program managers to develop and oversee volunteer programs. Volunteers are expected to follow agency policies and exercise some responsibility for their own safety and security. Peace Corps emphasizes community acceptance as the key to maintaining volunteer safety and security. The agency has found that volunteer safety is best ensured when volunteers are well integrated into their host communities and treated as extended family and contributors to development. Reported incidence rates of crime against volunteers have remained essentially unchanged since we completed our report in 2002. Reported incidence rates for most types of assaults have increased since Peace Corps began collecting data in 1990, but have stabilized in recent years. The reported incidence rate for major physical assaults has nearly doubled, averaging about 9 assaults per 1,000 volunteer years in 1991-1993 and averaging about 17 assaults in 1998-2000. Reported incidence rates for major assaults remained unchanged over the next 2 years. Reported incidence rates of major sexual assaults have decreased slightly, averaging about 10 per 1,000 female volunteer years in 1991-1993 and about 8 per 1,000 female volunteer years in 1998-2000. Reported incidence rates for major sexual assaults averaged about 9 per 1,000 female volunteer years in 2001-2002. Peace Corps’ system for gathering and analyzing data on crime against volunteers has produced useful insights, but we reported in 2002 that steps could be taken to enhance the system.
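Because the figures above are expressed as incidents per 1,000 volunteer years, a brief sketch of that rate calculation may be helpful. The incident count and exposure in the sketch are hypothetical, since the report cites only the resulting rates, and the calculation is a generic one rather than Peace Corps' own methodology.

```python
# Minimal sketch of an incidence rate expressed per 1,000 volunteer years.
# The report cites only the resulting rates (for example, about 17 major
# physical assaults per 1,000 volunteer years in 1998-2000); the incident
# count and exposure below are hypothetical, and this is not Peace Corps'
# own methodology.

def rate_per_1000(incidents, volunteer_years):
    """Reported incidents per 1,000 volunteer years of service."""
    return 1000.0 * incidents / volunteer_years

# Hypothetical 3-year window: an average of 6,500 volunteers serving each
# year yields 19,500 volunteer years of exposure.
exposure = 3 * 6_500
major_physical_assaults = 332

print(f"{rate_per_1000(major_physical_assaults, exposure):.1f} per 1,000 volunteer years")
# Prints 17.0, the same order as the reported 1998-2000 average.
```

Expressing incidents per 1,000 volunteer years keeps rates comparable across periods in which the number of serving volunteers changes.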
Peace Corps officials agreed that reported increases are difficult to interpret; the data could reflect actual increases in assaults, better efforts to ensure that agency staff report all assaults, and/or an increased willingness among volunteers to report incidents. The full extent of crime against volunteers, however, is unknown because of significant underreporting. Through its volunteer satisfaction surveys, Peace Corps is aware that a significant number of volunteers do not report incidents, thus reducing the agency’s ability to state crime rates with certainty. For example, according to the agency’s 1998 survey, volunteers did not report 60 percent of rapes and 20 percent of nonrape sexual assaults. Reasons cited for not reporting include embarrassment, fear of repercussions, confidentiality concerns, and a belief that Peace Corps could not help. In 2002, we observed that opportunities for additional analyses existed that could help Peace Corps develop better-informed intervention and prevention strategies. For example, our analysis showed that about a third of reported assaults after 1993 occurred from the fourth to the eighth month of service—shortly after volunteers completed training, arrived at sites, and began their jobs. We observed that this finding could be explored further and used to develop additional training. Since we issued our report, Peace Corps has taken steps to strengthen its efforts for gathering and analyzing crime data. The agency has hired an analyst responsible for maintaining the agency’s crime data collection system, analyzing the information collected, and publishing the results for the purpose of influencing volunteer safety and security policies. Since joining the agency a year ago, the analyst has focused on redesigning the agency’s incident reporting form to provide better information on victims, assailants, and incidents and preparing a new data management system that will ease access to and analysis of crime information. However, these new systems have not yet been put into operation. The analyst stated that the reporting protocol and data management system are to be introduced this summer, and responsibility for crime data collection and analysis will be transferred from the medical office to the safety and security office. According to the analyst, she has not yet performed any new data analyses because her focus to date has been on upgrading the system. We reported that Peace Corps’ headquarters had developed a safety and security framework but that the field’s implementation of this framework was uneven. The agency has taken steps to improve the field’s compliance with the framework, but recent Inspector General reports indicate that this has not been uniformly achieved. We previously reported that volunteers were generally satisfied with the agency’s training programs. However, some volunteers had housing that did not meet the agency’s standards, there was great variation in the frequency of staff contact with volunteers, and posts had emergency action plans with shortcomings. To increase the field’s compliance with the framework, in 2002, the agency hired a compliance officer at headquarters, increased the number of field- based safety and security officer positions, and created a safety and security position at each post. However, recent Inspector General reports continued to find significant shortcomings at some posts, including difficulties in developing safe and secure sites and preparing adequate emergency action plans. 
In 2002, we found that volunteers were generally satisfied with the safety training that the agency provided, but we found a number of instances of uneven performance in developing safe and secure housing. Posts have considerable latitude in the design of their safety training programs, but all provide volunteers with 3 months of preservice training that includes information on safety and security. Posts also provide periodic in-service training sessions that cover technical issues. Many of the volunteers we interviewed said that the safety training they received before they began service was useful and cited testimonials by current volunteers as one of the more valuable instructional methods. In both the 1998 and 1999 volunteer satisfaction surveys, over 90 percent of volunteers rated safety and security training as adequate or better; only about 5 percent said that the training was not effective. Some regional safety and security officer reports have found that improvements were needed in post training practices. The Inspector General has reported that volunteers at some posts said cross-cultural training and presentations by the U.S. embassy’s security officer did not prepare them adequately for safety-related challenges they faced during service. Some volunteers stated that Peace Corps did not fully prepare them for the racial and sexual harassment they experienced during their service. Some female volunteers at posts we visited stated that they would like to receive self-protection training. Peace Corps’ policies call for posts to ensure that housing is inspected and meets post safety and security criteria before the volunteers arrive to take up residence. Nonetheless, at each of the five posts we visited, we found instances of volunteers who began their service in housing that had not been inspected and had various shortcomings. For example, one volunteer spent her first 3 weeks at her site living in her counterpart’s office. She later found her own house; however, post staff had not inspected this house, even though she had lived in it for several months. Poorly defined work assignments and unsupportive counterparts may also increase volunteers’ risk by limiting their ability to build a support network in their host communities. At the posts we visited, we met volunteers whose counterparts had no plans for the volunteers when they arrived at their sites, and only after several months and much frustration did the volunteers find productive activities. We found variations in the frequency of staff contact with volunteers, although many of the volunteers at the posts we visited said they were satisfied with the frequency of staff visits to their sites, and a 1998 volunteer satisfaction survey reported that about two-thirds of volunteers said the frequency of visits was adequate or better. However, volunteers had mixed views about Peace Corps’ responsiveness to safety and security concerns and criminal incidents. The few volunteers we spoke with who said they were victims of assault expressed satisfaction with staff response when they reported the incidents. However, at four of the five posts we visited, some volunteers described instances in which staff were unsupportive when the volunteers reported safety concerns. For example, one volunteer said she informed Peace Corps several times that she needed a new housing arrangement because her doorman repeatedly locked her in or out of her dormitory. 
The volunteer said staff were unresponsive, and she had to find new housing without the Peace Corps’ assistance. In 2002, we reported that, while all posts had tested their emergency action plan, many of the plans had shortcomings, and tests of the plans varied in quality and comprehensiveness. Posts must be well prepared in case an evacuation becomes necessary. In fact, evacuating volunteers from posts is not an uncommon event. In the last two years Peace Corps has conducted six country evacuations involving nearly 600 volunteers. We also reported that many posts did not include all expected elements of a plan, such as maps demarcating volunteer assembly points and alternate transportation plans. In fact, none of the plans contained all of the dimensions listed in the agency’s Emergency Action Plan checklist, and many lacked key information. In addition, we found that in 2002 Peace Corps had not defined the criteria for a successful test of a post plan. Peace Corps has initiated a number of efforts to improve the field’s implementation of its safety and security framework, but Inspector General reports continued to find significant shortcomings at some posts. However, there has been improvement in post communications with volunteers during emergency action plan tests. We reviewed 10 Inspector General reports conducted during 2002 and 2003. Some of these reports were generally positive—one congratulated a post for operating an “excellent” program and maintaining high volunteer morale. However, a variety of weaknesses were also identified. For example, the Inspector General found multiple safety and security weaknesses at one post, including incoherent project plans and a failure to regularly monitor volunteer housing. The Inspector General also reported that several posts employed inadequate site development procedures; some volunteers did not have meaningful work assignments, and their counterparts were not prepared for their arrival at site. In response to a recommendation from a prior Inspector General report, one post had prepared a plan to provide staff with rape response training and identify a local lawyer to advise the post of legal procedures in case a volunteer was raped. However, the post had not implemented these plans and was unprepared when a rape actually occurred. Our review of recent Inspector General reports identified emergency action planning weaknesses at some posts. For example, the Inspector General found that at one post over half of first year volunteers did not know the location of their emergency assembly points. However, we analyzed the results of the most recent tests of post emergency action plans and found improvement since our last report. About 40 percent of posts reported contacting almost all volunteers within 24 hours, compared with 33 percent in 2001. Also, our analysis showed improvement in the quality of information forwarded to headquarters. Less than 10 percent of the emergency action plans did not contain information on the time it took to contact volunteers, compared with 40 percent in 2001. In our 2002 report, we identified a number of factors that hampered Peace Corps efforts to ensure that this framework produced high-quality performance for the agency as a whole. These included high staff turnover, uneven application of supervision and oversight mechanisms, and unclear guidance. We also noted that Peace Corps had identified a number of initiatives that could, if effectively implemented, help to address these factors. 
The agency has made some progress but has not completed implementation of these initiatives. High staff turnover hindered high quality performance for the agency. According to a June 2001 Peace Corps workforce analysis, turnover among U.S. direct hires was extremely high, ranging from 25 percent to 37 percent in recent years. This report found that the average tenure of these employees was 2 years, that the agency spent an inordinate amount of time selecting and orienting new employees, and that frequent turnover produced a situation in which agency staff are continually “reinventing the wheel.” Much of the problem was attributed to the 5-year employment rule, which statutorily restricts the tenure of U.S. direct hires, including regional directors, country desk officers, country directors and assistant country directors, and Inspector General and safety and security staff. Several Peace Corps officials stated that turnover affected the agency’s ability to maintain continuity in oversight of post operations. In 2002, we also found that informal supervisory mechanisms and a limited number of staff hampered Peace Corps efforts to ensure even application of supervision and oversight. The agency had some formal mechanisms for documenting and assessing post practices, including the annual evaluation and testing of post emergency action plans and regional safety and security officer reports on post practices. Nonetheless, regional directors and country directors relied primarily on informal supervisory mechanisms, such as staff meetings, conversations with volunteers, and e-mail to ensure that staff were doing an adequate job of implementing the safety and security framework. One country director observed that it was difficult to oversee program managers’ site development or monitoring activities because the post did not have a formal system for performing these tasks. We also reported that Peace Corps’ capacity to monitor and provide feedback to posts on their safety and security performance was limited by the small number of staff available to perform relevant tasks. We noted that the agency had hired three field-based security and safety specialists to examine and help improve post practices, and that the Inspector General also played an important role in helping posts implement the agency’s safety and security framework. However, we reported that between October 2000 and May 2002 the safety and security specialists had been able to provide input to only about one-third of Peace Corps’ posts while the Inspector General had issued findings on safety and security practices at only 12 posts over 2 years. In addition, we noted that Peace Corps had no system for tracking post compliance with Inspector General recommendations. We reported that the agency’s guidance was not always clear. The agency’s safety and security framework outlines requirements that posts are expected to comply with but did not often specify required activities, documentation, or criteria for judging actual practices—making it difficult for staff to understand what was expected of them. Many posts had not developed clear reporting and response procedures for incidents such as responding to sexual harassment. The agency’s coordinator for volunteer safety and security stated that unclear procedures made it difficult for senior staff, including regional directors, to establish a basis for judging the quality of post practices. 
The coordinator also observed that, at some posts, field-based safety and security officers had found that staff members did not understand what had to be done to ensure compliance with agency policies. The agency has taken steps to reduce staff turnover, improve supervision and oversight mechanisms, and clarify its guidance. In February 2003, Congress passed a law to allow U.S. direct hires whose assignments involve the safety of Peace Corps volunteers to serve for more than 5 years. The Peace Corps Director has employed his authority under this law to designate 23 positions as exempt from the 5-year rule. These positions include nine field-based safety and security officers, the three regional safety and security desk officers working at agency headquarters, as well as the crime data analyst and other staff in the headquarters office of safety and security. They do not include the associate director for safety and security, the compliance officer, or staff from the office of the Inspector General. Peace Corps officials stated that they are about to hire a consultant who will conduct a study to provide recommendations about adding positions to the current list. To strengthen supervision and oversight, Peace Corps has increased the number of staff tasked with safety and security responsibilities and created the office of safety and security that centralizes all security-related activities under the direction of a newly created associate directorate for safety and security. The agency’s new crime data analyst is a part of this directorate. In addition, Peace Corps has appointed six additional field-based safety and security officers, bringing the number of such individuals on duty to nine (with three more positions to be added by the end of 2004); authorized each post to appoint a safety and security coordinator to provide a point of contact for the field-based safety and security officers and to assist country directors in ensuring their post’s compliance with agency policies, including policies pertaining to monitoring volunteers and responding to their safety and security concerns (all but one post have filled this position); appointed safety and security desk officers in each of Peace Corps’ three regional directorates in Washington, D.C., to monitor post compliance in conjunction with each region’s country desk officers; and appointed a compliance officer, reporting to the Peace Corps Director, to independently examine post practices and to follow up on Inspector General recommendations on safety and security. In response to our recommendation that Peace Corps’ Director develop indicators to assess the effectiveness of the new initiatives and include these in the agency’s annual Government Performance and Results Act reports, Peace Corps has expanded its reports to include 10 quantifiable indicators of safety and security performance. To clarify agency guidance, Peace Corps has created a “compliance tool” or checklist that provides a fairly detailed and explicit framework for headquarters staff to employ in monitoring post efforts to put Peace Corps’ safety and security guidance into practice in their countries; strengthened guidance on volunteer site selection and development; developed standard operating procedures for post emergency action plans; and concluded a protocol clarifying that the Inspector General’s staff has responsibility for coordinating the agency’s response to crimes against volunteers.
These efforts have enhanced Peace Corps’ ability to improve safety and security practices in the field. The threefold expansion in the field-based safety and security officer staff has increased the agency’s capacity to support posts in developing and applying effective safety and security policies. Regional safety and security officers at headquarters and the agency’s compliance officer monitor the quality of post practices. All posts were required to certify that they were in compliance with agency expectations by the end of June 2003. Since that time, a quarterly reporting system has gone into effect wherein posts communicate with regional headquarters regarding the status of their safety and security systems and practices. The country desks and the regional safety and security officers, along with the compliance officer, have been reviewing the emergency action plans of the posts and providing them with feedback and suggestions for improvement. The compliance officer has created and is applying a matrix to track post performance in addressing issues deriving from a variety of sources, including application of the agency’s safety and security compliance tool and Inspector General reports. The compliance officer and staff from one regional office described their efforts, along with field-based safety and security staff and program experts from headquarters, to ensure an adequate response from one post where the Inspector General had found multiple safety and security weaknesses. However, efforts to put the new system in place are incomplete. As already noted, the agency has developed, but not yet introduced, an improved system for collecting and analyzing crime data. The new associate director of safety and security observes that the agency’s field-based safety and security officers come from diverse backgrounds and that some have been in their positions for only a few months. All have received training via the State Department’s Bureau of Diplomatic Security. However, they are still employing different approaches to their work. Peace Corps is preparing guidance for these officers that would provide them with a uniform approach to conducting their work and reporting the results of their analyses, but the guidance is still in draft form. The compliance officer has completed detailed guidance for crafting emergency action plans, but this guidance was distributed to the field only at the beginning of this month. Moreover, following up on our 2002 recommendation, the agency’s Deputy Director is heading up an initiative to revise and strengthen the indicators that the agency uses to judge the quality of all aspects of its operations, including ensuring volunteer safety and security, under the Government Performance and Results Act. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this statement, please contact Phyllis Anderson, Assistant Director, International Affairs and Trade, at (202) 512-7364 or [email protected]. Individuals making key contributions to this statement were Michael McAtee, Suzanne Dove, Christina Werth, Richard Riskie, Bruce Kutnick, Lynn Cothern, and Martin de Alteriis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
About 7,500 Peace Corps volunteers currently serve in 70 countries. The administration intends to increase this number to about 14,000. Volunteers often live in areas with limited access to reliable communications, police, or medical services. As Americans, they may be viewed as relatively wealthy and, hence, good targets for crime. In this testimony, GAO summarizes findings from its 2002 report Peace Corps: Initiatives for Addressing Safety and Security Challenges Hold Promise, but Progress Should be Assessed, GAO-02-818, on (1) trends in crime against volunteers and Peace Corps' system for generating information, (2) the agency's field implementation of its safety and security framework, and (3) the underlying factors contributing to the quality of these practices. The full extent of crime against Peace Corps volunteers is unclear due to significant under-reporting. However, Peace Corps' reported rates for most types of assaults have increased since the agency began collecting data in 1990. The agency's data analysis has produced useful insights, but additional analyses could help improve anti-crime strategies. Peace Corps has hired an analyst to enhance data collection and analysis to help the agency develop better-informed intervention and prevention strategies. In 2002, we reported that Peace Corps had developed safety and security policies but that efforts to implement these policies in the field had produced varying results. Some posts complied, but others fell short. Volunteers were generally satisfied with training. However, some housing did not meet standards and, while all posts had prepared and tested emergency action plans, many plans had shortcomings. Evidence suggests that agency initiatives have not yet eliminated this unevenness. The inspector general continues to find shortcomings at some posts. However, recent emergency action plan tests show an improved ability to contact volunteers in a timely manner. In 2002, we found that uneven supervision and oversight, staff turnover, and unclear guidance hindered efforts to ensure quality practices. The agency has taken action to address these problems. To strengthen supervision and oversight, it established an office of safety and security, supported by three senior staff at headquarters, nine field-based safety and security officers, and a compliance officer. In response to our recommendations, Peace Corps was granted authority to exempt 23 safety and security positions from the "5-year rule"--a statutory restriction on tenure. It also adopted a framework for monitoring post compliance and quantifiable performance indicators. However, the agency is still clarifying guidance, revising indicators, and establishing a performance baseline.
The Schedules of Federal Debt, including the accompanying notes, present fairly, in all material respects, in conformity with U.S. generally accepted accounting principles, the balances as of September 30, 2007, 2006, and 2005 for Federal Debt Managed by BPD; the related Accrued Interest Payables and Net Unamortized Premiums and Discounts; and the related increases and decreases for the fiscal years ended September 30, 2007 and 2006. BPD maintained, in all material respects, effective internal control relevant to the Schedule of Federal Debt related to financial reporting and compliance with applicable laws and regulations as of September 30, 2007, that provided reasonable assurance that misstatements, losses, or noncompliance material in relation to the Schedule of Federal Debt would be prevented or detected on a timely basis. Our opinion is based on criteria established under 31 U.S.C. § 3512 (c), (d), the Federal Managers’ Financial Integrity Act, and the Office of Management and Budget (OMB) Circular A-123, Management’s Responsibility for Internal Control. We found matters involving information security controls that we do not consider to be significant deficiencies. We will communicate these matters to BPD’s management, along with our recommendations for improvement, in a separate letter to be issued at a later date. Our tests for compliance in fiscal year 2007 with the statutory debt limit disclosed no instances of noncompliance that would be reportable under U.S. generally accepted government auditing standards or applicable OMB audit guidance. However, the objective of our audit of the Schedule of Federal Debt for the fiscal year ended September 30, 2007, was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. BPD’s Overview on Federal Debt Managed by the Bureau of the Public Debt contains information, some of which is not directly related to the Schedules of Federal Debt. We do not express an opinion on this information. However, we compared this information for consistency with the schedules and discussed the methods of measurement and presentation with BPD officials. Based on this limited work, we found no material inconsistencies with the schedules or U.S. generally accepted accounting principles. Management is responsible for (1) preparing the Schedules of Federal Debt in conformity with U.S. generally accepted accounting principles; (2) establishing, maintaining, and assessing internal control to provide reasonable assurance that the broad control objectives of the Federal Managers’ Financial Integrity Act are met; and (3) complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the Schedules of Federal Debt are presented fairly, in all material respects, in conformity with U.S. generally accepted accounting principles and (2) management maintained effective relevant internal control as of September 30, 2007, the objectives of which are the following: Financial reporting: Transactions are properly recorded, processed, and summarized to permit the preparation of the Schedule of Federal Debt for the fiscal year ended September 30, 2007, in conformity with U.S. generally accepted accounting principles.
Compliance with laws and regulations: Transactions related to the Schedule of Federal Debt for the fiscal year ended September 30, 2007, are executed in accordance with laws governing the use of budget authority and with other laws and regulations that could have a direct and material effect on the Schedule of Federal Debt. We are also responsible for (1) testing compliance with selected provisions of laws and regulations that have a direct and material effect on the Schedule of Federal Debt and (2) performing limited procedures with respect to certain other information appearing with the Schedules of Federal Debt. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the Schedules of Federal Debt; assessed the accounting principles used and any significant estimates; evaluated the overall presentation of the Schedules of Federal Debt; obtained an understanding of the entity and its operations, including its internal control relevant to the Schedule of Federal Debt as of September 30, 2007, related to financial reporting and compliance with laws and regulations (including execution of transactions in accordance with budget authority); tested relevant internal controls over financial reporting and compliance, and evaluated the design and operating effectiveness of internal control relevant to the Schedule of Federal Debt as of September 30, 2007; considered the process for evaluating and reporting on internal control and financial management systems under the Federal Managers’ Financial Integrity Act; and tested compliance in fiscal year 2007 with the statutory debt limit (31 U.S.C. § 3101(b) (Supp. IV 2005), as amended by Pub. L. No. 109-182, 120 Stat. 289 (2006), and Pub. L. No. 110-91, 121 Stat. 988 (2007)). We did not evaluate all internal controls relevant to operating objectives as broadly defined by the Federal Managers' Financial Integrity Act, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to controls over financial reporting and compliance. Because of inherent limitations in internal control, misstatements due to error or fraud, losses, or noncompliance may nevertheless occur and not be detected. We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. We did not test compliance with all laws and regulations applicable to BPD. We limited our tests of compliance to a selected provision of law that has a direct and material effect on the Schedule of Federal Debt for the fiscal year ended September 30, 2007. We caution that noncompliance may occur and not be detected by these tests and that such testing may not be sufficient for other purposes. We performed our work in accordance with U.S. generally accepted government auditing standards and applicable OMB audit guidance. In commenting on a draft of this report, BPD concurred with the conclusions in our report. The comments are reprinted in appendix I. Federal debt managed by the Bureau of the Public Debt (BPD) comprises debt held by the public and debt held by certain federal government accounts, the latter of which is referred to as intragovernmental debt holdings. As of September 30, 2007 and 2006, outstanding gross federal debt managed by the bureau totaled $8,993 and $8,493 billion, respectively.
The increase in gross federal debt of $500 billion during fiscal year 2007 was due to an increase in gross intragovernmental debt holdings of $294 billion and an increase in gross debt held by the public of $206 billion. As Figure 1 illustrates, both intragovernmental debt holdings and debt held by the public have steadily increased since fiscal year 2003. The primary reason for the increases in intragovernmental debt holdings is the annual surpluses in the Federal Old-Age and Survivors Insurance Trust Fund, Civil Service Retirement and Disability Fund, Federal Hospital Insurance Trust Fund, Federal Disability Insurance Trust Fund, and Military Retirement Fund. The increases in debt held by the public are due primarily to total federal spending exceeding total federal revenues. As of September 30, 2007, gross debt held by the public totaled $5,049 billion and gross intragovernmental debt holdings totaled $3,944 billion. Interest expense incurred during fiscal year 2007 consists of (1) interest accrued and paid on debt held by the public or credited to accounts holding intragovernmental debt during the fiscal year, (2) interest accrued during the fiscal year, but not yet paid on debt held by the public or credited to accounts holding intragovernmental debt, and (3) net amortization of premiums and discounts. The primary components of interest expense are interest paid on the debt held by the public and interest credited to federal government trust funds and other federal government accounts that hold Treasury securities. The interest paid on the debt held by the public affects the current spending of the federal government and represents the burden of servicing its debt (i.e., payments to outside creditors). Interest credited to federal government trust funds and other federal government accounts, on the other hand, does not result in an immediate outlay of the federal government because one part of the government pays the interest and another part receives it. However, this interest represents a claim on future budgetary resources and hence an obligation on future taxpayers. This interest, when reinvested by the trust funds and other federal government accounts, is included in the programs’ excess funds not currently needed in operations, which are invested in federal securities. During fiscal year 2007, interest expense incurred totaled $433 billion: $239 billion on debt held by the public and $194 billion on intragovernmental debt holdings. As Figure 2 illustrates, total interest expense has increased in fiscal years 2003 through 2007. Debt held by the public reflects how much of the nation’s wealth has been absorbed by the federal government to finance prior federal spending in excess of total federal revenues. As of September 30, 2007 and 2006, gross debt held by the public totaled $5,049 billion and $4,843 billion, respectively (see Figure 1), an increase of $206 billion. Both the borrowings and the repayments of debt held by the public increased from fiscal year 2006 to fiscal year 2007. After Treasury took into account the increased issuances of State and Local Government Series securities, Treasury decided to finance the remaining current operations using more short-term securities. (Callable securities mature between fiscal years 2013 and 2015 but are reported by their call date.)
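The dollar amounts in this overview fit together arithmetically; the minimal sketch below simply rechecks the fiscal year 2007 composition and interest figures (amounts in billions, as reported). The variable names are ours, not BPD's.

```python
# Consistency check of the fiscal year 2007 figures cited above, in billions
# of dollars as reported; the variable names are ours, not BPD's.

held_by_public_2007, intragov_2007 = 5_049, 3_944
held_by_public_2006, intragov_2006 = 4_843, 3_650

gross_2007 = held_by_public_2007 + intragov_2007    # 8,993
gross_2006 = held_by_public_2006 + intragov_2006    # 8,493

# The $500 billion increase splits into $206 billion (public) and
# $294 billion (intragovernmental).
assert gross_2007 - gross_2006 == 500
assert held_by_public_2007 - held_by_public_2006 == 206
assert intragov_2007 - intragov_2006 == 294

# Fiscal year 2007 interest expense: $239 billion on debt held by the public
# plus $194 billion on intragovernmental debt holdings equals $433 billion.
assert 239 + 194 == 433
```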
The government also issues to the public, state and local governments, and foreign governments and central banks nonmarketable securities, which cannot be resold and have maturity dates ranging from on demand to more than 10 years. As of September 30, 2007, nonmarketable securities totaled $621 billion, or 12 percent of debt held by the public. As of that date, nonmarketable securities primarily consisted of savings securities totaling $197 billion and special securities for state and local governments totaling $297 billion. The Federal Reserve Banks (FRBs) act as fiscal agents for Treasury, as permitted by the Federal Reserve Act. As fiscal agents for Treasury, the FRBs play a significant role in the processing of marketable book-entry securities and paper U.S. savings bonds. For marketable book-entry securities, selected FRBs receive bids, issue book-entry securities to awarded bidders and collect payment on behalf of Treasury, and make interest and redemption payments from Treasury’s account to the accounts of security holders. For paper U.S. savings bonds, selected FRBs sell, print, and deliver savings bonds; redeem savings bonds; and handle the related transfers of cash. (Callable securities mature between fiscal years 2012 and 2015 but are reported by their call date.) Intragovernmental debt holdings represent balances of Treasury securities held by over 230 individual federal government accounts with either the authority or the requirement to invest excess receipts in special U.S. Treasury securities that are guaranteed for principal and interest by the full faith and credit of the U.S. Government. Intragovernmental debt holdings primarily consist of balances in the Social Security, Medicare, Military Retirement, and Civil Service Retirement and Disability trust funds. As of September 30, 2007, such funds accounted for $3,419 billion, or 87 percent, of the $3,944 billion in intragovernmental debt holdings balances (see Figure 4). As of September 30, 2007 and 2006, gross intragovernmental debt holdings totaled $3,944 billion and $3,650 billion, respectively (see Figure 1), an increase of $294 billion. The majority of intragovernmental debt holdings are Government Account Series (GAS) securities. GAS securities consist of par value securities and market-based securities, with terms ranging from on demand out to 30 years. Par value securities are issued and redeemed at par (100 percent of the face value), regardless of current market conditions. Market-based securities, however, can be issued at a premium or discount and are redeemed at par value on the maturity date or at market value if redeemed before the maturity date. (Figure 4 shows the components of intragovernmental debt holdings as of September 30, 2007.) The Social Security trust funds consist of the Federal Old-Age and Survivors Insurance Trust Fund and the Federal Disability Insurance Trust Fund. In addition, the Medicare trust funds are made up of the Federal Hospital Insurance Trust Fund and the Federal Supplementary Medical Insurance Trust Fund. On May 17, 2007, a House bill was introduced and approved to increase the debt limit from $8,965 billion to $9,815 billion. The bill was then referred to the Senate Committee on Finance on May 21, 2007, where it gained approval on September 12, 2007. Projections determined that the United States would hit the statutory debt limit on October 1, 2007, and consequently, the full Senate passed this measure to raise the debt limit by $850 billion on September 27, 2007.
On September 29, 2007, Public Law 110-91 was enacted, which raised the statutory debt ceiling to $9,815 billion.

Thirty-Year Bond Issuance/Discontinuation of 3-Year Note

The 30-year bond was re-introduced in February 2006 with semi-annual issuance planned. In August 2006, Treasury announced that the 30-year bond would be issued on a quarterly basis beginning in February 2007. The February issue was reopened in May 2007, followed by an original issue in August 2007 that will be reopened in November 2007. This quarterly issuance pattern has benefited the Separate Trading of Registered Interest and Principal of Securities (STRIPS) market by creating interest payments for February, May, August, and November. Beginning in February 2006, the auction and issuance of the monthly 5-year note was shifted to month end to accommodate the re-introduction of the 30-year bond. Additionally, Treasury's ongoing monitoring of the fiscal year's economic outlook resulted in the discontinuance of the 3-year note. Discontinuance of the 3-year note will allow Treasury to ensure large, liquid benchmark issuances, better balance its portfolio, and manage the fiscal outlook. The final scheduled auction of the 3-year note was held on May 7, 2007.

Discontinuance of Long-Term Securities in Legacy Treasury Direct

On January 18, 2007, a final amendment to the Uniform Offering Circular (UOC) was published in the Federal Register clarifying that the Treasury Department may announce certain marketable Treasury securities as not eligible for purchase or holding in Legacy Treasury Direct. Legacy Treasury Direct, which was implemented in 1986, will be phased out and replaced by the newer, online TreasuryDirect system. To assist with this phasing out, the offering of longer-term securities in Legacy Treasury Direct was discontinued; since January 2007, 30-year bonds and 20-year TIPS are no longer available in Legacy Treasury Direct. This amendment also clarified that the announcement for each auction, in conjunction with the UOC, provides the terms and conditions for the sale and issuance of marketable Treasury bills, notes, bonds, and TIPS.

TreasuryDirect is an Internet-accessed system that enables investors to purchase the full range of Treasury securities and manage their holdings in a single account. Sensitive online transactions, such as bank account changes and securities sales and transfers, could become vulnerable to fraud. In July 2007, BPD therefore began processing these sensitive transactions through certified paper requests. This third-party investor identification helps mitigate risk and assures individual investors of the security of their TreasuryDirect investments by providing additional verification and a written record of transaction requests.

Postal Retiree Health Benefits Fund

On December 20, 2006, the President signed H.R. 6407 into law as Public Law 109-435, the Postal Accountability and Enhancement Act. This act created a new Government Account Series trust fund, the Postal Retiree Health Benefits Fund. The fund is administered by the Office of Personnel Management and receives transfers from the United States Postal Service (USPS). The initial transfer of $3 billion was received and invested in par value securities on April 6, 2007. Additional amounts of $17.1 billion and $5.4 billion were transferred and invested on June 30, 2007, and September 28, 2007, respectively. The fund is not expected to make payouts until 2017.
Beginning with the accounting date of June 1, 2007, BPD is publishing key daily debt-related financial data on our website, http://www.treasurydirect.gov/govt/reports/pd/feddebt/feddebt_daily.htm. Similar financial information is currently published monthly. During the past fiscal year, BPD strengthened Internet communications with customers by redesigning the government section of the Treasurydirect.gov website. Additional online resources are now available, and the overall functionality and accessibility features are greatly improved. The daily reporting of the Schedules of Federal Debt was implemented to support the Treasury strategic objective to "make accurate, timely financial information on U.S. Government programs readily available." The enhanced financial reporting is geared toward providing our customers more timely information and is one of BPD's strategic goals for FY 2007.

Federal debt outstanding is one of the largest legally binding obligations of the federal government. Nearly all of the federal debt has been issued by Treasury, with a small portion issued by other federal government agencies. Treasury issues debt securities for two principal reasons: (1) to borrow needed funds to finance the current operations of the federal government and (2) to provide an investment and accounting mechanism for certain federal government accounts' excess receipts, primarily trust funds. Total gross federal debt outstanding has dramatically increased over the past 25 years, from $1,142 billion as of September 30, 1982, to $8,993 billion as of September 30, 2007 (see Figure 5). Large budget deficits emerged during the 1980s due to tax policy decisions and increased outlays for defense and domestic programs. Through fiscal year 1997, annual federal deficits continued to be large and debt continued to grow at a rapid pace. As a result, total federal debt increased almost fivefold between 1982 and 1997. By fiscal year 1998, federal debt held by the public was beginning to decline. In fiscal years 1998 through 2001, the amount of debt held by the public fell by $476 billion, from $3,815 billion to $3,339 billion. However, higher federal outlays and tax policy decisions have resulted in an increase in debt held by the public from $3,339 billion in 2001 to $5,049 billion in 2007.

Even in those years when debt held by the public declined, total federal debt increased because of increases in intragovernmental debt holdings. Over the past 4 fiscal years, intragovernmental debt holdings increased by $1,085 billion, from $2,859 billion as of September 30, 2003, to $3,944 billion as of September 30, 2007. By law, trust funds have the authority or are required to invest surpluses in federal securities. As a result, the intragovernmental debt holdings balances primarily represent the trust funds' cumulative annual excess of tax receipts, interest credited, and other collections over spending. As shown in Figure 6, interest rates have fluctuated over the past 25 years. The average interest rates reflected there represent the original issue weighted effective yield on securities outstanding at the end of each fiscal year.
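To illustrate the averaging just described, the sketch below computes an original issue weighted effective yield for a small, hypothetical set of securities outstanding at a fiscal year-end. The security amounts and yields are invented for the example and are not drawn from the schedules.

```python
# Hypothetical portfolio of securities outstanding at fiscal year-end.
# Each entry: (par amount outstanding in $ billions, original-issue effective yield).
securities = [
    (500, 0.045),   # e.g., Treasury notes at 4.5 percent (assumed)
    (300, 0.050),   # e.g., Treasury bonds at 5.0 percent (assumed)
    (200, 0.042),   # e.g., Treasury bills at 4.2 percent (assumed)
]

total_outstanding = sum(amount for amount, _ in securities)
weighted_yield = sum(amount * yld for amount, yld in securities) / total_outstanding
print(f"Original-issue weighted effective yield: {weighted_yield:.3%}")
# With these assumed amounts, the result is about 4.59 percent.
```

Because the weights are the amounts outstanding, larger issues pull the average toward their own yields.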
[Schedules of Federal Debt Managed by the Bureau of the Public Debt for the Fiscal Years Ended September 30, 2007 and 2006 (Dollars in Millions): balances and activity for Federal Debt Held by the Public (Note 2) and Intragovernmental Debt Holdings (Note 3), including premiums and discounts, accrued interest, and net amortization (Note 4).] The accompanying notes are an integral part of these schedules.

Notes to the Schedules of Federal Debt Managed by the Bureau of the Public Debt
For the Fiscal Years Ended September 30, 2007 and 2006 (Dollars in Millions)

Note 1. Significant Accounting Policies

The Schedules of Federal Debt Managed by the Bureau of the Public Debt (BPD) have been prepared to report fiscal year 2007 and 2006 balances and activity relating to monies borrowed from the public and certain federal government accounts to fund the U.S. government's operations. Permanent, indefinite appropriations are available for the payment of interest on the federal debt and the redemption of Treasury securities. The Constitution empowers the Congress to borrow money on the credit of the United States. The Congress has authorized the Secretary of the Treasury to borrow monies to operate the federal government within a statutory debt limit. Title 31 U.S.C. authorizes Treasury to prescribe the debt instruments and otherwise limit and restrict the amount and composition of the debt. BPD, an organizational entity within the Fiscal Service of the Department of the Treasury, is responsible for issuing Treasury securities in accordance with such authority and for accounting for the resulting debt. In addition, BPD has been given the responsibility to issue Treasury securities to trust funds for trust fund receipts not needed for current benefits and expenses. BPD issues and redeems Treasury securities for the trust funds based on data provided by program agencies and other Treasury entities.

The schedules were prepared in conformity with U.S. generally accepted accounting principles and from BPD's automated accounting system, the Public Debt Accounting and Reporting System. Interest costs are recorded as expenses when incurred, instead of when paid. Certain Treasury securities are issued at a discount or premium. These discounts and premiums are amortized over the term of the security using an interest method for all long-term securities and the straight-line method for short-term securities. The Department of the Treasury also issues Treasury Inflation-Protected Securities (TIPS). The principal for TIPS is adjusted daily over the life of the security based on the Consumer Price Index for All Urban Consumers.

Note 2. Federal Debt Held by the Public

As of September 30, 2007 and 2006, Federal Debt Held by the Public consisted of the following: [Table: Federal Debt Held by the Public by security type and average interest rate, totaling to Total Federal Debt Held by the Public.] Treasury issues marketable bills at a discount and pays the par amount of the security upon maturity. The average interest rate on Treasury bills represents the original issue effective yield on securities outstanding as of September 30, 2007 and 2006, respectively. Treasury bills are issued with a term of one year or less. Treasury issues marketable notes and bonds as long-term securities that pay semi-annual interest based on the securities' stated interest rate.
These securities are issued at either par value or at an amount that reflects a discount or a premium. The average interest rate on marketable notes and bonds represents the stated interest rate adjusted by any discount or premium on securities outstanding as of September 30, 2007 and 2006. Treasury notes are issued with a term of 2 to 10 years, and Treasury bonds are issued with a term of more than 10 years. Treasury also issues TIPS, which have interest and redemption payments tied to the Consumer Price Index, a widely used measure of inflation. TIPS are issued with a term of 5 years or more. At maturity, TIPS are redeemed at the inflation-adjusted principal amount or the original par value, whichever is greater. TIPS pay a semi-annual fixed rate of interest applied to the inflation-adjusted principal. The inflation-adjusted principal balance of TIPS included in Federal Debt Held by the Public includes inflation of $50,517 million and $43,927 million as of September 30, 2007 and 2006, respectively. Federal Debt Held by the Public includes federal debt held outside of the U.S. government by individuals, corporations, Federal Reserve Banks (FRB), state and local governments, and foreign governments and central banks. The FRBs owned $775 billion and $765 billion of Federal Debt Held by the Public as of September 30, 2007 and 2006, respectively. These securities are held in the FRB System Open Market Account (SOMA) for the purpose of conducting monetary policy.

Note 2. Federal Debt Held by the Public (continued)

Treasury issues nonmarketable securities at either par value or at an amount that reflects a discount or a premium. The average interest rate on nonmarketable securities represents the original issue weighted effective yield on securities outstanding as of September 30, 2007 and 2006. Nonmarketable securities are issued with a term of on demand to more than 10 years. As of September 30, 2007 and 2006, nonmarketable securities consisted of the following: [Table: nonmarketable securities by type, including State and Local Government Series securities.]

Government Account Series (GAS) securities are nonmarketable securities issued to federal government accounts. Federal Debt Held by the Public includes GAS securities issued to certain federal government accounts. One example is the GAS securities held by the Government Securities Investment Fund (G-Fund) of the federal employees' Thrift Savings Plan. Federal employees and retirees who have individual accounts own the GAS securities held by the fund. For this reason, these securities are considered part of Federal Debt Held by the Public rather than Intragovernmental Debt Holdings. The GAS securities held by the G-Fund consist of overnight investments redeemed one business day after their issue. The net increase in amounts borrowed from the fund during fiscal years 2007 and 2006 is included in the respective Borrowings from the Public amounts reported on the Schedules of Federal Debt. The fiscal year-ends of September 30, 2007 and 2006, fell on a Sunday and a Saturday, respectively. As a result, $26,591 million and $31,656 million of marketable Treasury notes that had matured but not been repaid are included in the balance of total Federal Debt Held by the Public as of September 30, 2007 and 2006, respectively. Settlement of these debt repayments occurred on Monday, October 1, 2007, and Monday, October 2, 2006.
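The TIPS mechanics described in Note 2 can be illustrated with a short worked sketch. The par amount, index ratio, and coupon rate below are assumed values chosen for the example rather than figures from the schedules; the calculation shows how the CPI-based adjustment to principal drives both the semi-annual interest payment and the redemption amount.

```python
# Illustrative TIPS calculation (hypothetical values, not from the schedules).
par_value = 1000.00        # original principal (assumed)
index_ratio = 1.12         # ratio of the reference CPI at the date to the CPI at issue (assumed)
coupon_rate = 0.025        # stated annual fixed rate (assumed)

# Principal is adjusted by the CPI-based index ratio.
adjusted_principal = par_value * index_ratio                  # 1120.00

# Semi-annual interest is the fixed rate applied to the adjusted principal.
semiannual_interest = adjusted_principal * coupon_rate / 2    # 14.00

# At maturity, the holder receives the greater of the adjusted principal or the original par value.
redemption_amount = max(adjusted_principal, par_value)        # 1120.00

print(adjusted_principal, semiannual_interest, redemption_amount)
```

Because redemption is the greater of the adjusted principal and the original par value, a holder is protected against deflation reducing the principal below par at maturity.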
Note 3. Intragovernmental Debt Holdings

[Table: Intragovernmental Debt Holdings by federal government account, including funds such as the Foreign Service Retirement and Disability Fund and the National Service Life Insurance Fund, administered by the Social Security Administration (SSA); Office of Personnel Management (OPM); Department of Health and Human Services (HHS); Department of Defense (DOD); Department of Labor (DOL); Federal Deposit Insurance Corporation (FDIC); Department of Energy (DOE); Department of Housing and Urban Development (HUD); Department of the Treasury (Treasury); Department of State (DOS); Department of Transportation (DOT); and Department of Veterans Affairs (VA).]

Intragovernmental Debt Holdings primarily consist of GAS securities. Treasury issues GAS securities at either par value or at an amount that reflects a discount or a premium. The average interest rates for fiscal years 2007 and 2006 were 5.1 and 5.2 percent, respectively. The average interest rate represents the original issue weighted effective yield on securities outstanding as of September 30, 2007 and 2006. GAS securities are issued with a term of on demand to 30 years. GAS securities include TIPS, which are reported at an inflation-adjusted principal balance using the Consumer Price Index. As of September 30, 2007 and 2006, the inflation-adjusted principal balance included inflation of $28,643 million and $19,576 million, respectively. The fiscal year-ends of September 30, 2007 and 2006, fell on a Sunday and a Saturday, respectively. As a result, $53 million and $360 million of GAS securities held by federal agencies that had matured but not been repaid are included in the balance of Intragovernmental Debt Holdings as of September 30, 2007 and 2006, respectively. Settlement of these debt repayments occurred on Monday, October 1, 2007, and Monday, October 2, 2006.

Note 4. Interest Expense

Interest expense on Federal Debt Managed by BPD for fiscal years 2007 and 2006 consisted of the following: [Table: accrued interest and net amortization of premiums and discounts for Federal Debt Held by the Public and for Intragovernmental Debt Holdings, and total interest expense on Federal Debt Managed by BPD.]

The valuation of TIPS is adjusted daily over the life of the security based on the Consumer Price Index for All Urban Consumers. This daily adjustment is an interest expense for the Bureau of the Public Debt. Accrued interest on Federal Debt Held by the Public includes inflation adjustments of $10,276 million and $14,512 million for fiscal years 2007 and 2006, respectively. Accrued interest on Intragovernmental Debt Holdings includes inflation adjustments of $378 million and $607 million for fiscal years 2007 and 2006, respectively.

Note 5. Fund Balance With Treasury

The Fund Balance with Treasury, a non-entity, intragovernmental account, is not included on the Schedules of Federal Debt and is presented for informational purposes.

In addition to the individual named above, Dawn B. Simpson, Assistant Director; Dean D. Carpenter; Emily M. Clancy; Dennis L. Clarke; Chau L. Dinh; Lisa M. Galvan-Treviño; Vivian M. Gutierrez; Erik S. Huff; Bret R.
Kressin; Nicole M. McGuire; and Jay R. McTigue made key contributions to this report.
GAO is required to audit the consolidated financial statements of the U.S. government. Due to the significance of the federal debt held by the public to the governmentwide financial statements, GAO has also been auditing the Bureau of the Public Debt's (BPD) Schedules of Federal Debt annually. The audit of these schedules is done to determine whether, in all material respects, (1) the schedules are reliable and (2) BPD management maintained effective internal control relevant to the Schedule of Federal Debt. Further, GAO tests compliance with a significant selected provision of law related to the Schedule of Federal Debt. Federal debt managed by BPD consists of Treasury securities held by the public and by certain federal government accounts, referred to as intragovernmental debt holdings. The level of debt held by the public reflects how much of the nation's wealth has been absorbed by the federal government to finance prior federal spending in excess of federal revenues. Intragovernmental debt holdings represent balances of Treasury securities held by federal government accounts, primarily federal trust funds such as Social Security, that typically have an obligation to invest their excess annual receipts over disbursements in federal securities. In GAO's opinion, BPD's Schedules of Federal Debt for fiscal years 2007 and 2006 were fairly presented in all material respects, and BPD maintained effective internal control relevant to the Schedule of Federal Debt as of September 30, 2007. GAO also found no instances of noncompliance in fiscal year 2007 with the statutory debt limit. As of September 30, 2007 and 2006, federal debt managed by BPD totaled about $8,993 billion and $8,493 billion, respectively. Total federal debt increased over each of the last 4 fiscal years. Total federal spending has exceeded total federal revenues, which has resulted in increases in debt held by the public. Further, certain trust funds (e.g., Social Security) continue to run cash surpluses, resulting in increased intragovernmental debt holdings, since the federal government spends these surpluses on other operating costs and replaces them with federal debt instruments. These debt holdings are backed by the full faith and credit of the U.S. government and represent a priority call on future budgetary resources. As a result, total gross federal debt increased about 33 percent between the end of fiscal years 2003 and 2007. On September 29, 2007, legislation was enacted to raise the statutory debt limit by $850 billion to $9,815 billion. This was the third time since the end of fiscal year 2003 that the statutory debt limit had to be raised to avoid a breach. During that time, the debt limit has increased by over $2.4 trillion, or about 33 percent, from $7,384 billion on September 30, 2003, to the current limit of $9,815 billion.
The use of IT is pervasive in the federal government as agencies have become dependent on computerized information systems and electronic data to carry out operations and to process, maintain, and report information. As our past work has shown, protecting federal systems and the information on them is essential because the loss or unauthorized disclosure or alteration of the information can lead to serious consequences and can result in substantial harm to individuals and the federal government. Specifically, ineffective protection of IT systems and information can result in threats to national security, economic well-being, and public health and safety; loss or theft of resources, including money and intellectual property; inappropriate access to and disclosure, modification, or destruction of sensitive information; use of computer resources for unauthorized purposes or to launch an attack on other computer systems; damage to networks and equipment; loss of public confidence; and high costs for remediation. While some incidents can be resolved quickly and at minimal cost, others may go unresolved and result in significant costs. Federal agencies rely extensively on contractors to provide IT services and operate systems to help carry out their missions. For example, we reported that in fiscal year 2012, the Department of Defense obligated approximately $360 billion for contracts for goods and services, such as information technology and weapon systems maintenance. The ability to contract for technology services can allow an agency to obtain or offer enhanced services without the cost of owning the required technology or maintaining the human capital required to deploy and operate it. Specifically, contractors and their employees provide services and systems to agencies at agency and contractor facilities, directly and by remote access. Services can include computer and telecommunication systems and services, and testing, quality control, installation, and operation of computer equipment. Federal laws require agencies to protect the privacy and security of federal data and information systems. To help protect against threats to federal systems, FISMA sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets, including those operated by contractors on behalf of the agency. 
It requires each agency to develop, document, and implement an information security program that includes the following components: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training to inform personnel, including contractors, of information security risks and of their responsibilities in complying with agency policies and procedures designed to reduce these risks, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

Under FISMA, each agency CIO has the responsibility to ensure that agency information and information systems, including those operated by contractors, are being protected under the agency's information security program. In addition, OMB's annual FISMA reporting instructions require agencies to develop policies and procedures for agency officials to follow when performing oversight of the implementation of security and privacy controls by contractors. FISMA requires each agency to have an annual independent evaluation of its information security program and practices, including controls testing and compliance assessment. OMB guidance specifically requires each agency inspector general, or other independent auditor, to perform the evaluation, including an assessment of the effectiveness of the agency's contractor oversight. Additionally, inspectors general are to evaluate agency efforts in providing oversight of contractor employees who have privileged access to federal data and information systems.

In addition to establishing responsibilities for agencies, FISMA assigns specific information security responsibilities to OMB and NIST: OMB is to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies. It is also responsible for reviewing, at least annually, and approving or disapproving agency information security programs. Further, OMB is to report annually to Congress on the implementation of FISMA by the agencies.
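Agencies that track the FISMA program components listed earlier in this section in an automated inventory could represent them as a simple structured checklist. The following is a minimal sketch under that assumption; the class, field names, and fiscal years are invented for illustration and are not drawn from FISMA, OMB guidance, or any agency's actual tooling.

```python
# Purely illustrative checklist of the FISMA program components listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProgramComponent:
    name: str
    last_evaluated_fy: Optional[int] = None  # fiscal year the component was last evaluated

    def evaluated_within_a_year(self, current_fy: int) -> bool:
        # Treat a component as current if it was evaluated in this or the prior fiscal year,
        # reflecting FISMA's "no less than annually" testing expectation.
        return self.last_evaluated_fy is not None and self.last_evaluated_fy >= current_fy - 1

components = [
    ProgramComponent("periodic risk assessments", 2013),
    ProgramComponent("risk-based policies and procedures", 2013),
    ProgramComponent("subordinate security plans", 2012),
    ProgramComponent("security awareness training (including contractors)", 2013),
    ProgramComponent("annual testing and evaluation of controls", 2013),
    ProgramComponent("remedial action (POA&M) process", 2012),
    ProgramComponent("incident detection, reporting, and response procedures", 2013),
    ProgramComponent("continuity of operations plans", None),
]

for component in components:
    status = "current" if component.evaluated_within_a_year(2013) else "NEEDS REVIEW"
    print(f"{component.name}: {status}")
```

A real program inventory would track far more detail, such as systems, owners, and supporting evidence, but the shape of the checklist mirrors the statutory components.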
NIST's responsibilities include developing security standards and guidelines for agencies (other than for national security systems) that include standards for categorizing information and information systems according to ranges of risk levels, minimum security requirements for information and information systems in risk categories, guidelines for the detection and handling of information security incidents, and guidelines for identifying an information system as a national security system.

The Privacy Act of 1974 limits how federal agencies collect, disclose, or use personal information. Under this act, agencies are to, among other things, establish appropriate safeguards to ensure the security and confidentiality of personal information maintained in a system of records and protect it against anticipated security or integrity threats or hazards. The Privacy Act's requirements also apply to government contractors and contractor employees who have access to or maintain agency systems of records that contain personally identifiable information. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public through a system-of-records notice in the Federal Register, which includes the system safeguards for the security and confidentiality of personal information. In addition, under the E-Government Act of 2002 (Pub. L. No. 107-347, § 208 (Dec. 17, 2002); 44 U.S.C. § 3501 note), agencies must conduct PIAs before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form. In conducting PIAs, agencies are to ensure that the handling of the information conforms to applicable privacy legal requirements, determine risks, and examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks.

To ensure that contractor-operated systems meet federal information security and privacy requirements, the Federal Acquisition Regulation (FAR) requires that agency acquisition planning for IT comply with the information technology security requirements in FISMA, OMB's implementing policies including Appendix III of OMB Circular A-130, and NIST guidance and standards. The FAR establishes uniform policies and procedures for the acquisition of supplies and services by executive agencies; the FAR and agency supplements are codified in title 48 of the Code of Federal Regulations. As relevant here, the FAR's acquisition planning requirements for IT security are at 48 C.F.R. § 7.103(w); see also FAR § 7.105(b)(16) (Government-furnished information) and (18) (Security considerations). The FAR addresses application of the Privacy Act to contractors at subpart 24.1, Protection of Individual Privacy.

NIST Special Publications 800-53 and 800-53A guide agencies in selecting security and privacy controls for systems and assessing them to ensure that the selected controls are in place and functioning as expected. Additional NIST special publications on IT security services and risk management (Special Publications 800-35 and 800-37) identify several key activities important to contractor oversight for assessing the security and privacy controls of information systems. The key activities and the steps included in each are shown in table 3. The relevant editions are NIST Special Publication 800-53 Rev. 4, Security and Privacy Controls for Federal Information Systems and Organizations (Gaithersburg, MD: Apr. 2013), and NIST Special Publication 800-53A, Guide for Assessing the Security Controls in Federal Information Systems and Organizations (Gaithersburg, MD: Jun. 2010).
In April 2005, we reported on federal agencies' implementation of FISMA requirements and their oversight of contractors and others with privileged access to federal data and systems. We determined that, although most agencies reported having written policies that addressed information security for contractor-provided IT services and systems, few had established specific policies for overseeing the information security practices of contractors and contractor employees to ensure compliance with contract requirements and agency information security policies. We concluded that, without specific oversight policies establishing when and how agencies will review contractor-operated systems, officials responsible for the systems may not be taking sufficient action to ensure that security requirements are being met. We recommended, among other things, that the Director of OMB ensure that federal agencies develop policies for ensuring the information security of contractors and other users with privileged access to federal data. Subsequently, OMB modified its instructions to better ensure that federal agencies develop policies for ensuring the information security provided by contractor employees.

Under FISMA, each agency CIO has the responsibility to ensure that agency information and information systems, including those operated by contractors, are being protected under the agency's information security program. In addition, OMB's annual FISMA reporting instructions require agencies to develop policies and procedures for agency officials to follow when performing oversight of the implementation of security and privacy controls by contractors. CIO oversight of the agency's information security program provides agency officials assurance that they are protecting sensitive agency information. According to NIST, assessment of security and privacy controls is a key element of security program oversight. While the six agencies we reviewed generally established security and privacy requirements for contractors to follow and prepared for assessments to determine the effectiveness of contractor implementation of controls, five of the six were inconsistent in overseeing the execution and review of those assessments. Table 4 details the degree of implementation of oversight activities for the selected systems at each agency. While agencies performed two of the eight key steps in all cases, most were inconsistent in performing the remaining steps for the oversight of the selected contractor-operated systems. Specifically:

Communicate requirements to contractors. Four of the six agencies (DOE, DHS, EPA, and OPM) communicated security and privacy requirements to contractors in contracts for the systems we reviewed. For example, a contract for one system at EPA specifically required the contractor to comply with the EPA information security policy. Two of the six agencies (DOT and State) did not always communicate security and privacy requirements to contractors in the contracts for agency systems. NIST guidance states that security requirements should be stated explicitly or by reference in contracts.
However, while State's departmental policies include references regarding contractor requirements for protecting personally identifiable information and system authorization, the contract for one system that we reviewed did not contain language that communicated these requirements. Furthermore, while the contract for one of the systems at DOT was modified to include requirements for background investigations, it included no language that communicated agency security and privacy requirements. Officials at both agencies were unable to explain why this language was not included in the contracts. Without specific security requirements in the contract, these two systems are at increased risk that contractors may not understand the requirements they are expected to implement or cannot be held to the security and privacy requirements during contract performance.

Select and document security and privacy controls. For all 12 of the systems that we reviewed, the agencies documented within the system security plan the security and privacy controls that were expected to be implemented for the system. According to NIST Special Publication 800-37, system security plans are intended to provide an overview of the security requirements for the system and describe the security controls in place or planned for meeting those requirements. Each agency supported the selection of controls by documenting privacy risks and impacts to the systems we reviewed within a PIA, as called for when systems contain personally identifiable information.

Select an independent assessor. Five of the six agencies ensured that assessors used for the systems we reviewed were independent, as required by NIST. For moderate-impact information systems and higher, such as the ones that we reviewed, NIST states that an independent third-party reviewer should be used for the assessment to ensure that the review is unbiased. For example, for both systems we reviewed at OPM, the agency used a different contractor to assess the system, and system officials took steps to verify that the assessor was independent. However, one agency, State, did not ensure that the assessors used for both systems we reviewed were independent. State officials allowed the contractor to select the assessor for both systems we reviewed and did not take steps to verify the assessors' independence. State officials stated that they believed it was not their responsibility to ensure the independence of the assessors for these particular systems. As a result, the agency has reduced assurance that the assessments were complete and unbiased.

Develop a test plan. Five of the six agencies adequately documented test plans for the assessments of the two systems we reviewed at each agency. These plans documented the controls to be tested and appropriate assessment procedures. NIST Special Publication 800-53A states that test plans are to document the objectives for the security control assessment and provide a detailed road map of how the assessors are to test the information security and privacy controls for the system. One of the six agencies (DOE) did not document which controls from the system security plan were to be tested and which assessment procedures were to be followed for one system, because officials could not locate the system test plan.
DOE officials stated that the computer housing the information became corrupted and the detailed test plan could therefore not be provided. They also stated that, in response to our audit, they now plan to expedite the development of a test plan to be executed this year. Without a detailed test plan, agency officials have reduced assurance that the selected controls were tested and that the correct tests were executed.

Execute the test plan. One of the six agencies (DHS) effectively executed the test plan for the two systems we reviewed. For both of its systems, the controls from the test plan were effectively tested, and for areas such as background investigations and contingency plan training, evidence was provided showing that all of the contractors operating the system had received an investigation or training. NIST guidance calls for agencies to ensure that the test plan is appropriately executed during the assessment process. However, the system assessments that were performed by five of the six agencies (DOE, DOT, EPA, OPM, and State) were not always effective. For example, DOT and State did not always ensure that system assessments evaluated the extent to which background investigations had been conducted for contractor employees. Instead, the agencies relied on agency-wide testing of personnel security as a common control. However, agency-wide testing was not comprehensive enough to identify the lapses that we found regarding background investigations for contractor personnel working on these systems. Specifically, for one of the DOT systems we reviewed, department officials responsible for system testing had not evaluated whether the seven contractor employees working on the system had the required background investigation. When they did so in response to our audit, they found that three of them did not. Officials stated that they subsequently removed system access rights for the three contractor employees until their background investigations had been completed. For the department's other selected system, DOT officials did not have evidence that 44 of 133 contractor employees had undergone a current background investigation. For the two State systems we reviewed, department officials responsible for these systems stated that they did not believe it was necessary for them to check whether contractor employees had undergone a background investigation. However, the system security plans for both State systems had documented the selection of background investigations as applicable security controls, therefore calling for them to be included in the scope of testing. By not testing that all contractor employees operating a system have had an appropriate background investigation completed, agency officials lack assurance that contractor employees can be trusted with access to government information and systems. Furthermore, for 8 of the 12 systems that we reviewed, the agencies did not always ensure that system assessments accurately evaluated the extent to which contractor employees had completed contingency plan training as required. NIST guidance states that anyone with responsibilities for implementing the contingency plan should receive regular training on their role. However, DOE, DOT, EPA, OPM, and State officials were unable to demonstrate that contractor employees had received the necessary training, despite assessment results that stated they had.
Additionally, DOT officials were unable to identify whether several staff members listed in the contingency plan of one system were federal employees or contractor personnel. Overseeing that key contractor staff had taken or completed the contingency plan training would provide increased assurance that contractor employees are familiar with their roles and responsibilities under the contingency plan.

Recommend remediation actions. All 12 of the system assessments that we reviewed produced a report showing recommendations from the assessor. NIST states that, since results of the security control assessment ultimately influence the content of the system security plan and the plan of action and milestones, agency officials should review the security assessment report to determine the appropriate steps required to correct weaknesses and deficiencies identified during the assessment. Furthermore, assessors' recommendations in the security assessment report are an important input into agency officials' risk-based decisions on addressing weaknesses. All 12 of the system assessments that we reviewed included recommendations to address weaknesses.

Review the assessment results. Three of the six agencies (DHS, EPA, and OPM) adequately reviewed assessment results, ensuring that all of the controls selected for the systems and the evaluation methods used for the controls were included. However, three of the six agencies (DOT, DOE, and State) did not adequately review the assessment results. For one system each at DOT, DOE, and State, documentation provided by agency officials showed that the authorizing official had not performed a thorough review. NIST states that the system's authorizing official or designated representative is to assess the current security state of the system or the common controls inherited by the system. For the system at DOT, the test evidence for the media protection and physical security controls (25 controls in total) that was documented, reviewed, and accepted was from a different DOT system. DOT officials confirmed that these controls had not been sufficiently tested and that they would test them as part of their next system assessment. At DOE, officials were able to provide the executive summary of the test results but could not show that the full test results had been reviewed. Agency officials stated that the computer housing the information became corrupted and the detailed test results could not be provided. In addition, at State, a system security control assessment did not document that 69 of the system's controls were tested, yet the assessment showed that the results had been reviewed and accepted. State officials were unable to provide a reason for this lapse. Without properly reviewing the assessment results, agency officials may lack assurance that all of the controls selected for a system are properly tested.

Develop a plan of action and milestones for remediation of weaknesses. For both systems we reviewed, three of the six agencies (OPM, DHS, and DOE) maintained POA&Ms that included all of the NIST elements, such as estimated completion dates, resource allocation, and issue identification. For example, for one system we reviewed at OPM, the POA&M is maintained by agency officials using a software application that includes the elements required by NIST. However, three of the six agencies (DOT, State, and EPA) did not always complete or update POA&Ms for their contractor-operated systems.
Specifically, the POA&Ms for one of the two systems we reviewed at each of State and DOT were missing information such as estimated completion dates and the resources assigned for resolution. Additionally, State did not include all of the weaknesses identified in the assessment report within the POA&M for the second system we reviewed. State officials stated that those weaknesses should have been captured in the POA&M. EPA could not provide an updated POA&M for one of the two systems we reviewed. Without complete or up-to-date POA&Ms, agencies increase the risk that identified weaknesses will not be resolved in a timely fashion.

The responsibility for adequately mitigating risks arising from the use of contractor-operated systems remains with the agency. OMB's annual FISMA reporting instructions require agencies to develop policies for information security oversight of contractors, and the FAR, in its procedures for acquisition planning, requires agencies to ensure that information technology acquisitions comply with FISMA, OMB's implementing policies, and NIST guidance and standards. Further, NIST SP 800-53 states that agencies should develop, document, and implement a process that provides oversight to ensure that agency testing of security and privacy controls is planned and conducted consistent with organizational priorities. A contributing reason for the shortfalls identified in agency oversight of contractors was that agencies had not documented procedures to direct officials in performing such oversight activities effectively. For example, system officials for one system at DOT accepted assessment results for 25 controls from the wrong system; the department did not have procedures in place to direct officials on how to effectively review such test results. According to Office of the CIO officials we interviewed at each of the six selected agencies, all of an agency's information security and privacy policies and procedures apply to its federal employees and systems as well as to contractors, contractor employees, and contractor-operated systems, and the system assessment process is intended to provide assurance that these policies and procedures are being implemented. However, as described in the previous section, we found inconsistencies in the oversight of contractor-operated systems at five of the six agencies; none of the agencies had procedures in place to direct officials in how to conduct such oversight. Such inconsistencies may have been mitigated if procedures had been created, documented, and implemented. For example, agency officials reviewing system assessment results could refer to a procedure outlining the necessary actions for an effective review of an assessment conducted by contractors. As a result, agency officials have less assurance that oversight activities are being performed consistently and effectively over all of their contractor-operated systems, and weaknesses may go undetected and unresolved, such as having contractor employees operate a system without undergoing a background investigation.

In fulfilling its responsibilities to develop guidance and oversee the implementation of FISMA, OMB has issued guidance for agencies to ensure that contractors and contractor employees meet agency information security and privacy requirements. Specifically, OMB issues annual FISMA reporting instructions to guide agencies as they report on their security requirements.
The instructions state that agencies are responsible for ensuring that systems operated by contractors meet FISMA information security requirements. The OMB instructions also state that systems operated by contractors are to be reported as part of the agency's system inventory, tested on an annual basis, and reviewed appropriately. OMB collects the information from agencies on their implementation of FISMA and then provides a summary of the data in an annual report to Congress. Further, OMB has developed guidance over several years to assist agencies in assessing their contractors' performance. The guidance ranges from providing agencies with overarching requirements for managing contractors to requiring agency officials with acquisition and procurement responsibilities to take specific actions. Examples of OMB guidance for agencies that address contractor management are shown in table 5.

OMB has also tasked DHS with certain responsibilities assisting government-wide efforts to provide adequate, risk-based, and cost-effective cybersecurity. OMB and DHS have met with agency CIOs, chief information security officers, and other agency officials to discuss and assist in developing focused strategies for improving their agencies' cybersecurity posture. OMB officials from the Office of E-Government and Information Technology stated that agency oversight of contractor-operated systems is discussed as needed at these meetings and that assisting agencies in implementing OMB guidance and instructions provides greater protections for all of an agency's systems, including contractor-operated systems. OMB officials from the same office stated that these meetings allow them to conduct outreach and education to assist agencies in understanding the guidance as necessary.

OMB guidance to agencies for categorizing and reporting contractor-operated systems does not clearly define what a contractor-operated system is, and consequently agencies are interpreting the guidance differently. FISMA assigns OMB responsibilities to develop and oversee the implementation of policies, principles, standards, and guidelines on information security in federal agencies. Guidance provided as part of OMB's annual FISMA reporting instructions identifies several types of contractor relationships an agency may have in using a contractor for a system. However, it does not specify which agency systems with contractor relationships should be categorized as contractor-operated. We and the inspectors general have found that not all agencies are interpreting this guidance in the same manner. Specifically, two of the six agencies we reviewed did not report all of the systems that are operated by contractors on their behalf as "contractor-operated" in their FISMA submissions. For example, CIO officials from the State Department stated that the number of contractor-operated systems the department reported as part of its 2012 FISMA submission did not include all systems that are operated by contractors. Rather, officials stated that the department reported under this label only those systems that are both owned and operated by contractors, and it identifies systems that are government-owned as "agency-operated" even when contractors operate the system on behalf of the department. Conversely, DHS officials stated that the department reports systems as "contractor operated" only when they are government-owned but operated by contractors.
DHS systems that are both owned and operated by contractors are designated by a third category known as an "external information system." Those systems are not included in either the list of the department's agency-operated systems or its contractor-operated systems. Additionally, in their fiscal year 2012 annual FISMA reports, inspectors general from 9 of the 24 major agencies found data reliability issues with their agencies' categorization of contractor-operated systems. For example, DOT's Inspector General reported that 24 of the department's 60 information systems were owned and operated by a contractor, but only 4 were categorized as contractor-operated by the department.

OMB officials from the Office of E-Government and Information Technology stated that they believe the current guidance is sufficient to assist agencies in categorizing and reporting contractor-operated systems. OMB officials stated that, while they would not be able to identify all of the types of relationships that agencies have with contractors, agencies can refer to the guidance contained within the FISMA reporting instructions and its outline of five different categories of relationships that agencies may have with contractors operating systems or processing information on the agencies' behalf. Nevertheless, agencies' inconsistent implementation of OMB's guidance in reporting the number of contractor-operated systems demonstrates that existing outreach and education efforts during face-to-face meetings with agency information security officials are not always resulting in accurate reporting of agencies' reliance on contractors to operate systems and process government information on their behalf. Consequently, agencies are not reporting all of their contractor-operated systems in their FISMA submissions, the information is not complete enough to provide OMB with an accurate representation of the number of contractor-operated systems within the government, and OMB's report to Congress on the implementation of FISMA is not complete. Without complete information about contractor-operated systems, OMB's and DHS's ability to assist agencies in improving their cybersecurity postures may be limited, and Congress will not have complete information on the implementation of FISMA.

FISMA requires NIST to develop security standards and guidelines for agencies (other than for national security systems), including when information and information systems are used or operated by a federal contractor on behalf of an agency. OMB Circular A-130 states that GSA should provide agencies with security guidance when acquiring information technology products or services. To meet its FISMA requirements, NIST has issued guidance for agencies to follow in overseeing contractors as they operate, use, and access government information and information systems. It has produced numerous information security standards and guidelines and has updated existing information security publications to assist agencies in developing and implementing an information security program that manages risks, including those risks incurred through the use of contractors.
For example, in April 2013, NIST released the fourth update of a key federal government computer security control guide, Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations. The guide contains an external service providers section that requires agencies to ensure that contractors meet the same requirements that agencies themselves adhere to and recommends that agencies incorporate oversight controls such as establishing personnel security requirements, requiring third-party providers to notify the agency of personnel transfers, and monitoring provider compliance. Further, the guide includes an appendix on identifying and implementing controls for protecting privacy within an organization, including contractors. NIST officials also stated that the risks incurred from utilizing contractors should be addressed by incorporating the risk management framework as part of the terms and conditions of the contract and that agencies can specify the security controls that the contractor must implement and require appropriate evidence demonstrating that the contractor effectively implemented the specified controls. The risk management framework, specified in NIST Special Publication 800-37, provides a process that integrates information security and risk management activities into the system development life cycle.

GSA, in meeting its responsibilities under OMB Circular A-130, issued guidance and templates and established pre-negotiated contracts for agencies using contractors to maintain agency information and systems. GSA has also published an acquisition manual to assist agencies when they choose to create their own contracts. Additionally, GSA has negotiated contracts for products and services to assist agencies transitioning to cloud computing services. Cloud computing services are being managed through FedRAMP, a government-wide program to provide joint authorization and continuous security monitoring services for all federal agencies. GSA creates standardized FedRAMP templates for contract language and security assessments, among other things, and sample service level agreements for use in cloud service acquisitions.

The six agencies we reviewed made efforts to assess the implementation of security and privacy controls for selected contractor-operated systems. The agencies generally had established security and privacy requirements for contractors to follow and prepared for assessments to determine the effectiveness of contractor implementation of controls. However, oversight of the execution and review of assessments of contractor-operated systems was not consistent at five of the six agencies we reviewed. Specifically, agencies did not always prepare, execute, and review assessments of their contractor-operated systems. A contributing reason for these shortfalls is that agencies had not documented procedures for officials to follow in order to perform such oversight of contractors effectively. Until these agencies develop, document, and implement specific procedures for overseeing contractors, they will have reduced assurance that the contractors are adequately securing and protecting agency information, including reduced assurance of the extent to which contractor employees have undergone background investigations. OMB, NIST, and GSA have provided agencies guidance to assist in implementing privacy and security controls for contractor-operated systems. In addition, OMB and DHS are taking actions to assist agencies in planning to improve their cybersecurity posture.
However, the lack of clear instructions to agencies for reporting contractor-operated systems has contributed to incomplete information regarding the number of contractor-operated systems within the government. Without complete information, OMB and DHS assistance to agencies for improving their cybersecurity postures is limited, and Congress will not have complete information on the implementation of FISMA.

To ensure that the privacy and security controls of contractor-operated systems are being properly overseen, we are making 15 recommendations to five selected agencies.

We recommend that the Secretary of Energy develop, document, and implement oversight procedures for ensuring that, for each contractor-operated system, a system test plan is developed, a system test is fully executed, and test results are reviewed by agency officials.

We recommend that the Secretary of State develop, document, and implement oversight procedures for ensuring that, for each contractor-operated system, security and privacy requirements are communicated to contractors, an independent assessor is selected to assess the system, a system test is fully executed, test results are reviewed by agency officials, and plans of action and milestones with estimated completion dates and resources assigned for resolution are maintained.

We recommend that the Secretary of Transportation develop, document, and implement oversight procedures for ensuring that, for each contractor-operated system, security and privacy requirements are communicated to contractors, a system test is fully executed, test results are reviewed by agency officials, and plans of action and milestones with estimated completion dates and resources assigned for resolution are maintained.

We recommend that the Administrator of the Environmental Protection Agency develop, document, and implement oversight procedures for ensuring that, for each contractor-operated system, a system test is fully executed and plans of action and milestones with estimated completion dates and resources assigned for resolution are maintained.

We recommend that the Director of the Office of Personnel Management develop, document, and implement oversight procedures for ensuring that a system test is fully executed for each contractor-operated system.

To be able to effectively assist agencies with their contractor oversight programs, we recommend that the Director of the Office of Management and Budget, in collaboration with the Secretary of Homeland Security, develop and clarify reporting guidance to agencies for annually reporting the number of contractor-operated systems.

We received comments on a draft of this report from five of the six agencies to which we made recommendations. We requested comments from the Office of Management and Budget, but none were provided. The Departments of Energy, State, and Transportation; the Environmental Protection Agency; and the Office of Personnel Management generally agreed with our recommendations. A summary of their comments and our responses, where appropriate, are provided below. In written comments, the Chief Information Officer for DOE stated that the department is working to align with the recommendations. For the one system where the department could not produce the test plan or show evidence that the plan had been executed, the department has targeted that system for a new security test and evaluation. DOE's full comments are provided in appendix II.
In written comments, the acting Comptroller of the Department of State stated that the department agrees with our recommendations and is planning to develop, document, and implement oversight procedures for each contractor-operated, contractor-owned system. Additionally, he stated that department entities will seek to ensure that the privacy and security controls of all contractor-operated systems are properly overseen. State's comments are provided in appendix III. The Deputy Director of Audit Relations from DOT stated via e-mail that the department agrees to consider our recommendations. We continue to believe that the department needs to develop, document, and implement oversight procedures for each contractor-operated system. In its comments, EPA stated that one of the systems we had selected for review had not been updated by EPA since 2011. As a result of this information, we have modified the report and recommendation as appropriate. EPA's comments are provided in appendix IV. In written comments, the OPM Chief Information Officer concurred with our recommendation and stated that OPM will review its policies and procedures to further enhance OPM's oversight of contractor-operated systems. OPM's comments are provided in appendix V. In addition, the three agencies covered by our review that did not receive recommendations also reviewed our draft. In written comments, DHS's Director of the Departmental GAO-OIG Liaison Office stated that although the department did not receive a recommendation in this report, it will collaborate with OMB to update the FISMA guidance in support of our recommendation to OMB. DHS's comments are provided in appendix VI. The other two agencies—GSA and NIST—responded via e-mail, through a representative of GSA's GAO/IG Audit Response Division and a representative of NIST's Management and Organization Division, respectively, that they had no comments on the report. We also received technical comments from the Department of State, which we addressed as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrators of the Environmental Protection Agency and the General Services Administration, the Directors of the Office of Management and Budget and the Office of Personnel Management, the Secretaries of Energy, Homeland Security, State, and Transportation, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Our objectives were to assess the extent to which (1) selected agencies oversee the security and privacy controls for systems that are operated by contractors on their behalf and (2) executive branch agencies with government-wide guidance and oversight responsibilities have taken steps to assist agencies in ensuring implementation of information security and privacy controls by contractors. For our first objective, we selected a non-generalizable sample of six Chief Financial Officers Act agencies. The six agencies selected were the Departments of Energy, Homeland Security, State, and Transportation; the Environmental Protection Agency; and the Office of Personnel Management.
We selected these six agencies based on the number of contractor-operated systems reported by the 24 Chief Financial Officers Act agencies in fiscal year 2011 Federal Information Security Management Act (FISMA) data. Specifically, we identified the eight agencies with the largest reported number of contractor-operated systems as high, the next eight agencies as medium, and the last eight agencies as low. We then selected the top two agencies from the high, medium, and low groupings in order to review agencies reflecting a range in the reported number of contractor-operated systems. To gain insight into the six agencies' practices for protecting the security and privacy of information and systems, we interviewed officials and reviewed documentation regarding their policies and procedures for overseeing contractor privacy and security practices, including reviewing each agency's policies and procedures for identifying risks and vulnerabilities, providing security awareness training to personnel with significant information security responsibilities, developing plans of action and milestones, developing incident response plans, and testing system security controls. We conducted interviews with officials from each agency's Office of the Chief Information Officer, Privacy Office, and procurement office, as well as with system owners, to understand how they oversee the implementation of Federal Acquisition Regulation (FAR), FISMA, and Privacy Act requirements; relevant OMB policies; National Institute of Standards and Technology (NIST) guidance; and agency-wide and system-level policies and procedures. To understand how well agencies were overseeing the implementation of agency requirements by contractors, we reviewed agencies' oversight efforts at the system level. We selected two systems at each agency using a non-generalizable random sample from a list of agency-provided contractor-operated systems that were identified as having personally identifiable information and as either government-owned and contractor-operated or contractor-owned and contractor-operated. For each selected system, we examined whether the agencies implemented oversight of key elements of federal requirements and guidance, such as the FAR, FISMA, the Privacy Act, and NIST and Office of Management and Budget (OMB) guidance. The key elements were communicating requirements to contractors, selecting and documenting security controls, selecting an independent assessor, developing a test plan, executing the test plan, recommending remediation actions, reviewing results, and developing a plan of action and milestones. We also assessed whether agency officials for the selected information systems implemented policies and procedures set forth by the agency, including contractor oversight activities performed by the responsible agency official. For our second objective, we reviewed requirements and guidance provided to agencies by OMB, NIST, and the General Services Administration (GSA) to assist them in conducting contractor oversight. We interviewed DHS, GSA, OMB, and NIST officials regarding the policies and procedures for overseeing contractor privacy and security, and the activities taken to provide assistance to agencies regarding oversight of contractor-operated systems. We analyzed agency responses to OMB and DHS guidance regarding contractor-operated systems.
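The stratified agency-selection step described at the beginning of this appendix can be illustrated with a minimal sketch. The agency names and system counts below are hypothetical placeholders, not the fiscal year 2011 FISMA figures used in our review.

```python
# Minimal sketch of the agency-selection step described above. The agency
# names and counts of contractor-operated systems are hypothetical
# placeholders, not fiscal year 2011 FISMA data.

def select_agencies(reported_counts, group_size=8, picks_per_group=2):
    """Rank agencies by reported contractor-operated systems, split the ranking
    into high/medium/low groups, and take the top agencies from each group."""
    ranked = sorted(reported_counts.items(), key=lambda item: item[1], reverse=True)
    groups = {
        "high": ranked[:group_size],
        "medium": ranked[group_size:2 * group_size],
        "low": ranked[2 * group_size:3 * group_size],
    }
    return {label: [name for name, _ in members[:picks_per_group]]
            for label, members in groups.items()}

# Hypothetical counts for 24 agencies, Agency01 (most systems) through Agency24 (fewest).
counts = {f"Agency{i:02d}": systems for i, systems in enumerate(range(48, 0, -2), start=1)}
print(select_agencies(counts))
# {'high': ['Agency01', 'Agency02'], 'medium': ['Agency09', 'Agency10'],
#  'low': ['Agency17', 'Agency18']}
```

Under this approach, agencies with large, moderate, and small reported inventories of contractor-operated systems are all represented in the resulting sample.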
We also reviewed inspectors general FISMA reports to assess agencies' progress in meeting FISMA reporting requirements related to contractor security, including the reliability of agencies' reporting of contractor-operated systems. In addition, we interviewed officials from the offices of inspector general at the agencies we reviewed for the first objective to discuss how the agencies are reporting their inventories of systems. We conducted this performance audit from February 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, the following staff also made key contributions to the report: Nicholas Marinos (assistant director), Melina Asencio, Sher’rie Bacon, Kathleen Feild, Nancy Glover, Wilfred Holloway, Thomas Johnson, David Plocher, and Jeffrey Woodward.
Federal agencies often rely on contractors to operate computer systems and process information on their behalf. Federal law and policy require that agencies ensure that contractors adequately protect these systems and information. GAO was asked to evaluate how well agencies oversee contractor-operated systems. The objectives of this report were to assess the extent to which (1) selected agencies oversee the security and privacy controls for systems that are operated by contractors on their behalf and (2) executive branch agencies with government-wide guidance and oversight responsibilities have taken steps to assist agencies in ensuring implementation of information security and privacy controls by such contractors. To do this, GAO selected six agencies based on their reported number of contractor-operated systems and two systems at each agency using a non-generalizable random sample for review, analyzed agency policies and procedures, and examined security and privacy-related artifacts for selected systems. GAO also interviewed agency officials, reviewed federal guidance, and evaluated agency FISMA submissions. Although the six federal agencies that GAO reviewed (the Departments of Energy (DOE), Homeland Security (DHS), State, and Transportation (DOT); the Environmental Protection Agency (EPA); and the Office of Personnel Management (OPM)) generally established security and privacy requirements and planned for assessments to determine the effectiveness of contractor implementation of controls, five of the six agencies were inconsistent in overseeing the execution and review of those assessments, resulting in security lapses. For example, in one agency, testing did not discover that background checks of contractor employees were not conducted. The following table shows the degree of implementation of oversight activities at selected agencies. A contributing reason for these shortfalls is that agencies had not documented procedures for officials to follow in order to effectively oversee contractor performance. Until these agencies develop, document, and implement specific procedures for overseeing contractors, they will have reduced assurance that the contractors are adequately securing and protecting agency information. The Office of Management and Budget (OMB), the National Institute of Standards and Technology, and the General Services Administration have developed guidance to assist agencies in ensuring the implementation of security and privacy controls by their contractors. However, OMB guidance to agencies for categorizing and reporting on contractor-operated systems is not clear on when an agency should identify a system as contractor-operated, and therefore agencies are interpreting the guidance differently. In fiscal year 2012, inspectors general from 9 of the 24 major agencies found data reliability issues with agencies' categorization of contractor-operated systems. Without accurate information on the number of contractor-operated systems, OMB assistance to agencies to help improve their cybersecurity posture will be limited, and OMB's report to Congress on the implementation of the Federal Information Security Management Act (FISMA) will not be complete. GAO is recommending that five of the six selected agencies develop procedures for the oversight of contractors and that OMB clarify reporting instructions to agencies. The five agencies generally agreed with the recommendations, and OMB did not provide any comments.
DOD operates a worldwide supply system to buy, store, and distribute inventory items. Through this system, DOD manages several million types of consumable items, most of which are managed by DLA. DLA is DOD's largest combat support agency, providing worldwide logistics support in both peacetime and wartime to the military services as well as civilian agencies and foreign countries. DLA supplies almost every consumable item the military services need to operate. To do this, DLA operates three supply centers, including the Defense Supply Center in Philadelphia, Pennsylvania, which is responsible for procuring nearly all the food, clothing, and medical supplies used by the military. In addition, DLA has supply centers in Richmond, Virginia, and Columbus, Ohio. The Defense Distribution Center operates a worldwide network of 25 distribution depots that receive, store, and issue supplies. In addition, DLA's Defense Energy Support Center has the mission of purchasing fuel for the military services and other defense agencies. DLA also helps dispose of excess or unusable materiel and equipment through its Defense Reutilization and Marketing Service. To meet its mission, DLA relies on contractors as suppliers of commodities and as providers of services, including the acquisition and distribution of certain commodities. Traditionally, DLA buys consumable items in large quantities, stores them in distribution depots until they are requested by the military services, and then ships them to a service facility where they are used. For example, DLA procures military uniforms through competitive contracts. Defense Supply Center-Philadelphia's Clothing and Textile Directorate procures commodities such as battle dress uniforms, footwear, and body armor directly from contractors and stores them until they are needed by the services. DLA also relies on service contractors to help with the acquisition, management, and distribution of commodities. For example, DLA has a prime vendor arrangement in which a distributor of a commercial product line provides those products and related services to all of DLA's customers in an assigned region within a specified period of time after order placement. Under the prime vendor process, a single vendor buys items from a variety of manufacturers, and the inventory is stored in commercial warehouses. A customer orders the items from the prime vendor. Once the Defense Supply Center-Philadelphia approves the order, the prime vendor fills, ships, and tracks the order through final acceptance. The prime vendor then submits an invoice to Defense Supply Center-Philadelphia, which authorizes payment to the prime vendor and bills the customer. According to DLA, the benefits of prime vendor contracts include improved access to a wide range of high-quality products, rapid and predictable delivery, and reduced overhead charges. Other benefits of prime vendor contracts include significant reductions in the manpower needed to manage and warehouse these items at DLA and reduced transportation costs. In addition, prime vendor contracts provide for surge and broader mobilization capabilities and worldwide customer support. DLA also uses service contractors to provide services other than the acquisition of commodities. For example, the Defense Reutilization and Marketing Service uses contractors to support the disposal of government equipment and supplies considered surplus or unnecessary to DOD's mission.
Similarly, DLA uses service contractors to provide oversight, audit, and verification procedures for the destruction of DOD scrap property; operate Defense Reutilization and Marketing Office locations around the world, including sites in Kuwait, Iraq, and Afghanistan; and run the Defense Distribution Center, Kuwait, Southwest Asia, which provides distribution services and surge capability to all four service components to support the warfighters operating in the region. Current commodities distributed by the center are repair parts, barrier/construction materiel, clothing, textiles, and tents. The center also provides consolidated shipment and containerization services, as well as routine logistics support to the military community in the U.S. Central Command's theater of operations. DLA determines what and how many items it buys based on requirements from its military service customers. Without a good understanding of customers' projected needs, DLA is not assured it is buying the right items in the right quantities at the right time. Properly defined requirements are therefore fundamental to obtaining good value for contracts administered through DLA. As with any contracting decision, a prerequisite to good outcomes is a match between well-defined requirements and available resources. Our previous testimonies before this committee on weapons system acquisition and service contracts have highlighted several cases where poorly defined and changing requirements have contributed to increased costs, as well as services that did not meet the department's needs. We also noted that the absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor acquisition outcomes. In addition, requirements that are based on unrealistic assumptions make it impossible to execute programs within established cost, schedule, and performance targets. Our prior work has identified instances where problems in properly defining requirements can lead to ineffective or inefficient management of commodities. Inaccurate demand forecasting may result in inventory that does not match demand. The military services and DLA manage the acquisition and distribution of spare parts for defense weapon systems. Whereas the military services manage their own reparable spare parts, DLA provides the services with most of their consumable parts—that is, items of supply that are normally expended or intended to be used up beyond recovery. In prior work, we have reported that the Air Force, the Navy, and the Army had acquired billions of dollars of spare parts in excess of their current requirements. For example, for fiscal years 2004 to 2007, the Army had on average about $3.6 billion of spare parts inventory that exceeded current requirements, while also having inventory deficits that averaged about $3.5 billion. During that same time period, the Navy had secondary inventory that exceeded current requirements by an average of $7.5 billion, or 40 percent of total inventory. Mismatches between inventory levels and current requirements were caused in part by inaccurate demand forecasting. In our Navy work, for example, we noted that requirements frequently changed after purchase decisions had been made and that the Navy had not adjusted certain inventory management practices to account for the unpredictability in demand.
The military services' difficulty in forecasting demand for spare parts is among the reasons we have placed DOD's supply chain management on our high-risk series since 1990. In addition, we are currently reviewing DLA's management of consumable spare parts for its service customers. We are evaluating (1) the extent to which DLA's spare parts inventory reflects the amounts needed to support current requirements and (2) the factors that have contributed to DLA having any excesses or deficits in secondary inventory. As part of our review, we expect to report on how demand forecasting may affect inventory levels compared with requirements and what actions DLA is taking to understand and mitigate problems with demand forecasting. Inaccurate requirements and supply forecasts can affect the availability of critical supplies and inventory for the military, which, in turn, can result in a diminished operational capability and increased risk to troops. For example, as we reported in 2005, the Army's failure to conduct an annual update of its war reserve requirements for spare parts since 1999, as well as the Army's continued decisions not to fully fund war reserve spare parts, resulted in the inventory for some critical items being insufficient to meet initial wartime demand during Operation Iraqi Freedom. These items included lithium batteries, armored vehicle track shoes, and tires for 5-ton trucks, for which demand exceeded the supply on hand by over 18 times. Similarly, while DLA had a model to forecast supply requirements for contingencies, this model did not produce an accurate demand forecast for all items, including Meals Ready-to-Eat. Therefore, Army officials had to manually develop forecasts for Operation Iraqi Freedom, but did not always have sufficient or timely information needed to forecast accurate supply requirements. As a result, they underestimated the demand for some items. For example, demand for Meals Ready-to-Eat exceeded supply in February, March, and April 2003, when monthly demand peaked at 1.8 million cases, while the inventory was only 500,000 cases. Some combat support units came within a day or two of exhausting their Meals Ready-to-Eat rations, putting Army and Marine Corps units at risk of running out of food if the supply distribution chain was interrupted. Unrealistic time frames for acquisition and delivery of commodities can also have negative impacts on obtaining value. We previously testified that the Army's decision to issue black berets to all of its forces in just 8 months placed enormous demands on DOD's procurement system. Due to the extremely short time frame for delivery of the berets to the Army, DLA contracting officials took a number of actions to expedite award of the contracts, including undertaking contract actions without providing for "full and open" competition as required by the Competition in Contracting Act of 1984. According to contract documents, the contract actions were not competed because of an "unusual and compelling urgency," one of the circumstances permitting other than full and open competition. Despite these actions, DLA was unable to meet its deadline due to quality and delivery problems and had to terminate several contracts because the contractors could not meet delivery requirements. When contracting for commodities or services, DLA has a number of choices regarding the contracting arrangements to use.
Selecting the appropriate type is important because certain contracting arrangements may increase the government's cost risk, whereas others transfer some of that cost risk to the contractor. We have previously testified before this committee that once the decision has been made to use contractors to support DOD's missions or operations, it is essential that DOD clearly define its requirements and employ sound business practices, such as using appropriate contracting vehicles. For example, we testified that we had found numerous issues with DOD's use of time-and-materials contracts that increased the government's risks. These contracts are appropriate when specific circumstances justify the risks, but our findings indicate that they are often used as a default for a variety of reasons—ease, speed, and flexibility when requirements or funding are uncertain. Time-and-materials contracts are considered high risk for the government because they provide no positive profit incentive to the contractor for cost control or labor efficiency, and their use is supposed to be limited to cases where no other contract type is suitable. With regard to commodities, it is equally important that DLA use the appropriate contracting arrangements to obtain the best value at the lowest risk to the government. Our prior work over the past 10 years and the work of others has identified instances where using the wrong contracting arrangement led to the ineffective or inefficient acquisition of commodities. For example, as discussed above, when DLA was tasked to purchase black berets for the Army, the extremely short time frame placed DOD in a high-risk contracting situation. In their eagerness to serve the customer, DLA contracting officials shortcut normal contracting procedures to expedite awarding the contracts, allowing little time to plan for the purchase of the berets and little room to respond to production problems. In awarding a contract to one foreign firm, using other than full and open competition, the DLA contracting officer was confronted with a price that was 14 percent higher than the price of the domestic supplier. However, the contracting officer performed a price analysis and determined the price was fair and reasonable, explaining that given the deadline, there was no time to obtain detailed cost or pricing data, analyze those data, develop a negotiation position, negotiate with a firm, and then finally make the award. When competition was introduced into the process at a later date, prices declined. As another example of higher costs resulting from using a particular contract type to acquire commodities, we reported in July 2004 that the Air Force had used the Air Force Contract Augmentation Program contract to supply commodities for its heavy construction squadrons. While contractually permitted, the use of a cost-plus-award-fee contract as a supply contract may not be cost-effective. Under such contracts, the government reimburses the contractors' costs and pays an award fee that may be higher than warranted given the contractors' low level of risk when performing such tasks. Air Force officials recognized that the use of a cost-plus-award-fee contract to buy commodities may not be cost-effective, and under the current contract, commodities may be obtained using a variety of contracting arrangements.
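The risk-transfer point discussed above can be made concrete with a minimal sketch comparing the government's cost exposure under a firm-fixed-price arrangement and a time-and-materials arrangement. The hours, rates, and prices below are hypothetical and are not drawn from any DLA contract.

```python
# Illustrative comparison of the government's cost exposure under two contract
# types. All hours, rates, and prices are hypothetical and are not drawn from
# any DLA contract.

def firm_fixed_price_cost(agreed_price):
    """The government pays the agreed price; the contractor absorbs any overrun."""
    return agreed_price

def time_and_materials_cost(actual_hours, hourly_rate, materials_cost):
    """The government pays for the hours and materials the contractor actually expends."""
    return actual_hours * hourly_rate + materials_cost

estimated_hours, actual_hours = 1_000, 1_400   # a 40 percent labor overrun
hourly_rate, materials_cost = 100, 20_000

ffp = firm_fixed_price_cost(agreed_price=estimated_hours * hourly_rate + materials_cost)
tm = time_and_materials_cost(actual_hours, hourly_rate, materials_cost)

print(f"Firm-fixed-price cost to the government: ${ffp:,}")   # $120,000 - overrun absorbed by contractor
print(f"Time-and-materials cost to the government: ${tm:,}")  # $160,000 - overrun passed to the government
```

With the hypothetical 40 percent labor overrun, the fixed-price arrangement leaves the added cost with the contractor, while the time-and-materials arrangement passes it to the government, which is one reason the latter is treated as higher risk when other contract types would be suitable.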
Similarly, we noted in a 2007 report on the Army Corps of Engineers Restore Iraqi Oil Contract that DLA's Defense Energy Support Center was able to purchase fuel and supply products for the forces in Iraq more cheaply than the contractor because the Defense Energy Support Center was able to sign long-term contracts with the fuel suppliers, an acquisition strategy the contractor did not pursue because of the incremental funding provided by the Army. In addition, in 2008, the DOD Inspector General found that DLA was unable to effectively negotiate prices or obtain best value for noncompetitive spare parts when it contracted with an exclusive distributor—a company that represents parts suppliers to the U.S. government. Furthermore, the DOD Inspector General concluded that the exclusive distributor model was not a viable procurement alternative for DOD in part because of excessive pass-through charges, increased lead times to DOD, and an unnecessary layer of redundancy and cost. Our prior work has reported that DLA has taken some steps to determine whether the appropriate contracting arrangement is being used or whether contractors should be used at all. As we reported in 2006, DLA has recognized that the prime vendor concept may not be suitable for all commodities and has begun adjusting acquisition strategies to reassign programs to a best procurement approach. For example, DLA evaluated the acquisition of food service equipment and determined not to continue acquiring this equipment through a prime vendor. Instead, DLA decided to develop a new acquisition strategy that will require the development of a contractual relationship primarily with manufacturers or their representatives for equipment and incidental services. DLA has also initiated several actions aimed at strengthening oversight, such as modifying contracts to change the price verification process and establishing additional training for contracting officers and managers. In addition, DLA has taken some steps to determine whether contractors are the most efficient means to meet certain requirements. For example, in 2005, DLA conducted a public-private competition for warehousing functions at 68 sites used for disposing of surplus or unnecessary government equipment and supplies. DLA ultimately determined that it was more cost-effective to retain the government employees at these sites than to convert to contractor performance. In addition to ensuring that requirements for contracts awarded through DLA have been properly defined and the appropriate type of contract has been put in place, proper contract oversight and management is essential to ensure DOD gets value for taxpayers' dollars and obtains quality commodities or services in a cost-efficient and effective manner. Failure to provide adequate oversight hinders the department's ability to address poor contractor performance and avoid negative financial and operational impacts. In previous testimony before this committee, we noted that we have reported on numerous occasions that DOD did not adequately manage and assess contractor performance to ensure that its business arrangements were properly executed. Managing and assessing post-award performance entails various activities to ensure that the delivery of services meets the terms of the contract and requires adequate surveillance resources, proper incentives, and a capable workforce for overseeing contracting activities.
If surveillance is not conducted, is insufficient, or is not well documented, DOD is at risk of being unable to identify and correct poor contractor performance in a timely manner. Because DLA is responsible for billions of dollars of contracts for commodities and services, it is important that the agency ensure effective contract oversight and management and thereby obtain those commodities and services in an economical and efficient manner. However, we have identified several long-standing challenges that hinder DOD's effective management of contractors, including the need to ensure that adequate personnel are in place to oversee and manage contractors, the importance of training, and the need to collect and share lessons learned. Our prior work has found that while these challenges have affected DLA's ability to obtain value, in some cases DLA has also taken actions to address them. First, having the right people with the right skills to oversee contractor performance is critical to ensuring the best value for the billions of dollars spent each year on contractor support. DOD's difficulty in ensuring that appropriate oversight of contractors exists is among the reasons DOD contract management has been on GAO's high-risk series since 1992. While much of our work on contract management has been focused on weapons system acquisition and service contractors, we have found similar challenges with DOD's acquisition of commodities. In June 2006, we found that DLA officials were not conducting required price reviews for the prime vendor contracts for food service equipment and construction and equipment commodities. For example, the contracts for food service equipment required verification of price increases, but officials from the supply center were unable to provide documentation on why the price of an aircraft refrigerator increased from $13,825 in March 2002 to $32,642 in September 2004. Both logistics agency and supply center officials acknowledged that these problems occurred because management at the agency and supply center level were not providing adequate oversight to ensure that contracting personnel were monitoring prices. We also found that poor contract management can cause lapses in contract support and can lead to operational challenges, safety hazards, and waste. For example, in 2007 DLA was given the responsibility to contract for services to de-gas, store, and refill gas cylinders in Kuwait. Warfighters use gas cylinders for a variety of purposes, including, but not limited to, medical care for those who are hospitalized, equipment maintenance, and construction. However, as of July 2009, DLA has yet to compete and execute this contract. As a result, instead of receiving refilled cylinders from Kuwait, warfighters are continually buying full gas cylinders from local markets in the Middle East. This may lead to operational challenges and waste as warfighters must make efforts to purchase gases in Iraq while cylinders that could be refilled remain idle in Kuwait. A second long-term challenge for DOD's contract oversight and management is training. We have made multiple recommendations over the last decade that DOD improve the training of contract oversight personnel. We have found that DLA has recognized the need to improve training. As discussed above, our June 2006 report found that DLA officials were not conducting required price reviews for some prime vendor contracts.
Senior DLA officials acknowledged that weaknesses in oversight led to pricing problems and stated that they were instituting corrective actions. Among the weaknesses were the lack of knowledge or skills of contracting personnel and a disregard for the contracting rules and regulations. To address these weaknesses, DLA has established additional training for contracting officers and managers. In addition, DOD concurred with our recommendation that the Director, DLA, provide continual management oversight of the corrective actions taken to address pricing problems. DLA has also taken some actions to help ensure that contracting officer's representatives are properly trained. For example, DLA's Defense Reutilization and Marketing Service has recognized that performance-based contracts will only be effective if contracting officer's representatives accurately report contractor performance and contracting officers take appropriate actions. DLA has established contracting officer's representative training requirements to ensure that these individuals are properly trained to carry out their responsibilities. These requirements increase for contracts that are more complex or present higher risks to the government. While we have not evaluated the performance of DLA contracting officer's representatives, our previous work shows that when contracting officer's representatives are properly trained, they can better ensure that contractors provide services and supplies more efficiently and effectively. In addition, a working group from DOD's panel on contracting integrity in September 2008 recognized the importance of more in-depth contracting officer's representative training for more complex contracts or contracts that pose a greater risk to DOD. In February 2009, we reported that businesses and individuals that had been excluded from receiving federal contracts for egregious offenses continued to be awarded contracts. Our work demonstrated that most of the improper contracts and payments identified can be attributed to ineffective management of the governmentwide database that tracks excluded contractor information or to control weaknesses at both the agency that excluded the contractor and the contracting agency. Specifically, our work showed that excluded businesses continued to receive federal contracts from a number of agencies, including DLA, because officials (including contracting officers) at some agencies failed to enter complete information in the database in a timely manner or failed to check the database prior to making contract awards. In addition, some agencies, like DLA, used automated purchasing systems that did not interface with the database. In commenting on our report, agency officials stated that most of the issues we identified could be solved through improved training. A third long-term challenge for DOD's contract oversight and management is the need to collect and share institutional knowledge on the use of contractors, including lessons learned and best practices. Our prior work has found that DLA has taken some actions to improve the collection as well as the application of lessons learned. For example, in January 2000, we identified DLA's prime vendor program as an example of DLA adopting a best commercial practice for inventory management. Our work found that DLA was developing a policy to establish the basis for lessons learned from the reviews of prime vendor programs.
Key points of the policy include specific requirements for management oversight, such as pricing and compliance audits; a requirement that all prime vendor contracts comply with an established prime vendor pricing model; annual procurement management reviews for all prime vendor contracts; and a requirement for advance approval by headquarters for all prime vendor contracts, regardless of dollar value. However, because this policy was still in draft form at the time of our review, we did not evaluate it. In closing, Mr. Chairman, DLA has a key role in supporting the warfighter by providing a vast array of logistics support. In providing this support, DLA depends on contractors and, as such, must ensure that it is obtaining good value for the billions of dollars it spends every year. Regardless of whether DLA is buying commodities or services, well-defined requirements, appropriate contract types, and proper contract oversight and management are critical to ensuring that DLA gets what it pays for. Mr. Chairman and members of the committee, this concludes my testimony. I would be happy to answer any questions you might have. For further information about this testimony, please contact William Solis, Director, Defense Capabilities and Management, at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Carole Coffey, Lionel Cooper, Laurier Fish, Thomas Gosling, Melissa Hermes, James A. Reynolds, Cary Russell, Michael Shaughnessy, and Marilyn Wasleski. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's ability to project and sustain military power depends on effective logistics. As the Department of Defense's (DOD) largest combat support agency, providing worldwide logistics support in both peacetime and wartime, the Defense Logistics Agency (DLA) supplies almost every consumable item the military services need to operate, from Meals Ready-to-Eat to jet fuel. Given current budgetary pressures and the crucial role DLA plays in supporting the military services in the United States and overseas, it is vital that DOD ensure DLA is getting value for the commodities and services it acquires. The committee asked GAO to identify the challenges DOD faces in ensuring DLA gets value for the taxpayer's dollar and obtains quality commodities in a cost-efficient and effective manner. This testimony focuses on sound practices GAO has identified regarding obtaining value when contracting and how they can also apply to DLA's acquisition of commodities. GAO has made numerous recommendations aimed at improving DOD's management and oversight of contractors, and DOD has concurred with many of them. GAO is not making any new recommendations in this testimony. DOD faces challenges ensuring DLA gets value for the taxpayer's dollar and obtains quality commodities in a cost-efficient and effective manner. GAO's previous testimonies before this committee on weapons system acquisition and service contracts highlighted how essential it is that DOD employ sound practices when using contractors to support its missions or operations to ensure the department receives value regardless of the type of product or service involved. These practices include clearly defining its requirements, using the appropriate contract type, and effectively overseeing contractors. With regard to DLA, GAO's prior work has identified the following challenge areas: (1) Accurate Requirements Definition - Without a good understanding of customers' projected needs, DLA is not assured it is buying the right items in the right quantities at the right time. GAO's prior work has identified instances where problems in properly defining requirements can lead to ineffective or inefficient management of commodities. For example, GAO reported in 2005 that while DLA had a model to forecast supply requirements for contingencies, this model did not produce an accurate demand forecast for all items, including Meals Ready-to-Eat. As a result, the demand for these items was underestimated, and some combat support units came within a day or two of exhausting their Meals Ready-to-Eat rations. (2) Sound Business Arrangements - Selecting the appropriate contract type is important because certain contracting arrangements may increase the government's cost risk, whereas others transfer some of that cost risk to the contractor. For example, GAO noted in 2007 that DLA's Defense Energy Support Center was able to purchase fuel and supply products for the forces in Iraq more cheaply than an Army Corps of Engineers contractor because DLA was able to sign long-term contracts with the fuel suppliers. (3) Proper Contract Oversight and Management - Failure to provide adequate contract oversight and management hinders DOD's ability to address poor contractor performance and avoid negative financial and operational impacts. For example, in June 2006, GAO found that DLA officials were not conducting required price reviews for the prime vendor contracts for food service equipment and construction and equipment commodities.
Agency officials acknowledged that these problems occurred because management at the agency and supply center level were not providing adequate oversight to ensure that contracting personnel were monitoring prices. DLA has taken some actions to address these challenges. For example, DLA has begun adjusting acquisition strategies to reassign programs to a best procurement approach. DLA has also established contracting officer's representative training requirements to ensure these individuals are properly trained to carry out their responsibilities.
Drinking water and wastewater utilities are facing potentially significant investments over the next 20 years to upgrade an aging and deteriorated infrastructure, including underground pipelines, treatment, and storage facilities; meet new regulatory requirements; serve a growing population; and improve security. Adding to the problem is that many utilities have not been generating enough revenues from user charges and other local sources to cover their full cost of service. As a result, utilities have deferred maintenance and postponed needed capital improvements. To address these problems and help ensure that utilities can manage their needs cost-effectively, some water industry and government officials advocate the use of comprehensive asset management. Asset management is a systematic approach to managing capital assets in order to minimize costs over the useful life of the assets while maintaining adequate service to customers. While the approach is relatively new to the U.S. water industry, it has been used by water utilities in other countries for as long as 10 years. Each year, the federal government makes available billions of dollars to help local communities finance drinking water and wastewater infrastructure projects. Concerns about the condition of existing infrastructure have prompted calls to increase financial assistance and, at the same time, ensure that the federal government’s investment is protected. In recent years the Congress has been considering a number of proposals that would promote the use of comprehensive asset management by requiring utilities to develop and implement plans for maintaining, rehabilitating, and replacing capital assets, often as a condition of obtaining loans or other financial assistance. The federal government has had a significant impact on the nation’s drinking water and wastewater infrastructure by (1) providing financial assistance to build new facilities and (2) establishing regulatory requirements that affect the technology, maintenance, and operation of utility infrastructure. As we reported in 2001, nine federal agencies made available about $46.6 billion for capital improvements at water utilities from fiscal years 1991 through 2000. The Environmental Protection Agency (EPA) and the Department of Agriculture alone accounted for over 85 percent of the assistance, providing $26.4 billion and $13.3 billion, respectively, during the 10-year period; since then, the funding from these two agencies has totaled nearly $15 billion. EPA’s financial assistance is primarily in the form of grants to the states to capitalize the Drinking Water and Clean Water State Revolving Funds, which are used to finance improvements at local drinking water and wastewater treatment facilities, respectively. As part of the Rural Community Advancement Program, Agriculture’s Rural Utilities Service provides direct loans, loan guarantees, and grants to construct or improve drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural communities. In addition to its financial investment, EPA has promulgated regulations to implement the Safe Drinking Water Act and Clean Water Act, which have been key factors in shaping utilities’ capital needs and management practices. For example, under the Safe Drinking Water Act, EPA has set standards for the quality of drinking water and identified effective technologies for treating contaminated water. 
Similarly, under the Clean Water Act, EPA has issued national minimum technology requirements for municipal wastewater utilities and criteria that states use to establish water quality standards that affect the level of pollutants that such utilities are permitted to discharge. Thus, the federal government has a major stake in protecting its existing investment in water infrastructure and ensuring that future investments go to utilities that are built and managed to meet key regulatory requirements. Drinking water and wastewater utilities will need to invest hundreds of billions of dollars in their capital infrastructure over the next two decades, according to EPA; the Congressional Budget Office; and the Water Infrastructure Network, a consortium of industry, municipal, state, and nonprofit associations. As table 1 shows, the projected needs range from $485 billion to nearly $1.2 trillion. The estimates vary considerably, depending on assumptions about the nature of existing capital stock, replacement rates, and financing costs. Given the magnitude of the projected needs, it is important that utilities adopt a strategy to manage the repair and replacement of key assets as cost-effectively as possible and to plan to sustain their infrastructure over the long term. Local drinking water and wastewater utilities rely primarily on revenues from user rates to pay for infrastructure improvements. According to EPA's gap analysis, maintaining utility spending at current levels could result in a funding gap of up to $444 billion between projected infrastructure needs and available resources. However, EPA also estimates that if utilities' infrastructure spending grows at a rate of 3 percent annually over and above inflation, the gap will narrow considerably and may even disappear. EPA's report concludes that utilities will need to use some combination of increased spending and innovative management practices to meet the projected needs. The nation's largest utilities—those serving populations of at least 10,000—account for most of the projected infrastructure needs. For example, according to EPA data, large drinking water systems represent about 7 percent of the total number of community water systems, but account for about 65 percent of the estimated infrastructure needs. Similarly, about 29 percent of the wastewater treatment and collection systems are estimated to serve populations of 10,000 or more, and such systems account for approximately 89 percent of projected infrastructure needs for wastewater utilities. Most of the U.S. population is served by large drinking water and wastewater utilities; for example, systems serving at least 10,000 people provide drinking water to over 80 percent of the population. Pipeline rehabilitation and replacement represents a significant portion of the projected infrastructure needs. According to the American Society of Civil Engineers, U.S. drinking water and wastewater utilities are responsible for an estimated 800,000 miles of water delivery pipelines and between 600,000 and 800,000 miles of sewer pipelines, respectively. According to the most recent EPA needs surveys, the investment needed for these pipelines from 1999 through 2019 could be as much as $137 billion. Several recent studies have raised concerns about the condition of the existing pipeline network. For example, in August 2002, we reported the results of a nationwide survey of large drinking water and wastewater utilities.
Based on the survey, more than one-third of the utilities had 20 percent or more of their pipelines nearing the end of their useful life; and for 1 in 10 utilities, 50 percent or more of their pipelines were nearing the end of their useful life. In 2001, a major water industry association predicted that drinking water utilities will face significant repair and replacement costs over the next three decades, given the average life estimates for different types of pipelines and the years since their original installation. Other studies have made similar predictions for the pipelines owned by wastewater utilities. EPA and water industry officials cite a variety of factors that have played a role in the deterioration of utility infrastructure; most of these factors are linked to the officials’ belief that the level of ongoing investment in the infrastructure has not been sufficient to sustain it. For example, according to EPA’s Assistant Administrator for Water, the pipelines and plants that make up the nation’s water infrastructure are aging, and maintenance is too often deferred. He predicted that consumers will face sharply rising costs to repair and replace the infrastructure. Similarly, as the Water Environment Research Foundation reported in 2000, “years of reactive maintenance and minimal expenditures on sewers have left a huge backlog of repair and renewal work.” Our nationwide survey of large drinking water and wastewater utilities identified problems with the level of revenues generated from user rates and decisions on investing these revenues. For example: Many drinking water and wastewater utilities do not cover the full cost of service—including needed capital investments and operation and maintenance costs—through their user charges. Specifically, a significant percentage of the utilities serving populations of 10,000 or more—29 percent of the drinking water utilities and 41 percent of the wastewater utilities—were not generating enough revenue from user charges and other local sources to cover their costs. Many drinking water and wastewater utilities defer maintenance and needed capital improvements because of insufficient funding. About one-third of the utilities deferred maintenance expenditures in their most recent fiscal year; similar percentages of utilities reported deferring minor capital improvements and major capital improvements. About 20 percent of the utilities had deferred expenditures in all three categories. For many utilities, a significant disparity exists between the actual rehabilitation and replacement of their pipelines and the rate at which utility managers believe rehabilitation and replacement should occur. We found that only about 40 percent of the drinking water utilities and 35 percent of the wastewater utilities met or exceeded their desired rate of pipeline rehabilitation and replacement. The remaining utilities did not meet their desired rates. Roughly half of the utilities actually rehabilitated or replaced 1 percent or less of their pipelines annually. Utility managers also lack the information they need to manage their existing capital assets. According to our survey, many drinking water and wastewater utilities either do not have plans for managing their assets or have plans that may not be adequate in scope or content. Specifically, nearly one-third of the utilities did not have plans for managing their existing capital assets. 
Moreover, for the utilities that did have such plans, the plans in many instances did not cover all assets or did not contain one or more key elements, such as an inventory of assets, assessment criteria, information on the assets’ condition, and the planned and actual expenditures to maintain the assets. Comprehensive asset management has gained increasing recognition within the water industry as an approach that could give utilities the information and analytical tools they need to manage existing assets more effectively and plan for future needs. Using asset management concepts, utilities and other organizations responsible for managing capital infrastructure can minimize the total cost of designing, acquiring, operating, maintaining, replacing, and disposing of capital assets over their useful lives, while achieving desired service levels. Figure 1 shows some of the basic elements of comprehensive asset management and how the elements build on and complement each other to form an integrated management system. Experts within and outside the water industry have published manuals and handbooks on asset management practices and how to apply them. While the specific terminology differs, some fundamental elements of implementing asset management appear consistently in the literature. Collecting and organizing detailed information on assets. Collecting basic information about capital assets helps managers identify their infrastructure needs and make informed decisions about the assets. An inventory of an organization’s existing assets generally should include (1) descriptive information about the assets, including their age, size, construction materials, location, and installation date; (2) an assessment of the assets’ condition, along with key information on operating, maintenance, and repair history, and the assets’ expected and remaining useful life; and (3) information on the assets’ value, including historical cost, depreciated value, and replacement cost. Analyzing data to set priorities and make better decisions about assets. Under asset management, managers apply analytical techniques to identify significant patterns or trends in the data they have collected on capital assets; help assess risks and set priorities; and optimize decisions on maintenance, repair, and replacement of the assets. For example: Life-cycle cost analysis. Managers analyze life-cycle costs to decide which assets to buy, considering total costs over an asset’s life, not just the initial purchase price. Thus, when evaluating investment alternatives, managers also consider differences in installation cost, operating efficiency, frequency of maintenance and repairs, and other factors to get a cradle-to-grave picture of asset costs. Risk/criticality assessment. Managers use risk assessment to determine how critical the assets are to their operations, considering both the likelihood that an asset will fail and the consequences—in terms of costs and impact on the organization’s desired level of service—if the asset does fail. Based on this analysis, managers set priorities and target their resources accordingly. Integrating data and decision making across the organization. Managers ensure that the information collected within an organization is consistent and organized so that it is accessible to the people who need it. 
Among other things, the organization’s databases should be fully integrated; for instance, financial and engineering data should be compatible, and ideally each asset should have a unique identifier that is used throughout the organization. Regarding decision making, all appropriate units within an organization should participate in key decisions, which ensures that all relevant information gets considered and encourages managers to take an organizationwide view when setting goals and priorities. Linking strategy for addressing infrastructure needs to service goals, operating budgets, and capital improvement plans. An organization’s goals for its desired level of service—in terms of product quality standards, frequency of service disruptions, customer response time, or other measures—are a major consideration in the organization’s strategy for managing its assets. As managers identify and rank their infrastructure needs, they determine the types and amount of investments needed to meet the service goals. Decisions on asset maintenance, rehabilitation, and replacement are, in turn, linked to the organization’s short- and long-term financial needs and are reflected in the operating budget and capital improvement plan, as appropriate. Implementing the basic elements of asset management is an iterative process that individual organizations may begin at different points. Within the water industry, for example, some utilities may start out by identifying their infrastructure needs, while other utilities may take their first step by setting goals for the level of service they want to provide. The interrelationship between the elements of asset management can alter an organization’s strategy for managing its assets. For example, once an organization has completed a risk assessment, it may scale back its efforts to compile a detailed inventory of assets to focus initially on those assets determined to be critical. Similarly, as information on infrastructure needs and priorities improves, managers reexamine the level of planned investments, considering the impact on both revenue requirements and the level of service that can be achieved. According to advocates of asset management, while many organizations are implementing certain aspects of the process, such as maintaining an inventory of assets and tracking maintenance, these organizations are not realizing the full potential of comprehensive asset management unless all of the basic elements work together as an integrated management system. As the description of asset management indicates, implementing this approach is not a step-by-step, linear process. Asset management is an integrated system that utilities and other organizations can implement in a number of different ways, depending on what makes sense for their particular organization. In the United States, some drinking water and wastewater utilities, for example, are taking a more strategic approach, initially investing their resources in planning for asset management. Other utilities are focusing initially on collecting data. Another variation is that some utilities are adopting asset management on a utilitywide basis, while others are piloting the approach at a single facility or department or are targeting critical assets utilitywide. The level of sophistication with which asset management concepts are applied within a utility can also vary, depending on the size and complexity of the operations and the resources that the utility can devote to implementation. 
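To make the life-cycle cost and risk/criticality assessment techniques described above more concrete, the following minimal sketch works through both calculations. The discount rate, cost figures, and scoring scale are illustrative assumptions only; they are not drawn from any utility we reviewed or from the asset management guidance cited in this report.

```python
# Illustrative sketch of two analytical techniques discussed above:
# (1) life-cycle cost: the discounted total cost of acquiring, operating and
#     maintaining, and disposing of an asset over its useful life; and
# (2) a risk/criticality score: likelihood of failure weighted by consequence.
# The discount rate, costs, and scores are hypothetical.

def life_cycle_cost(purchase, annual_om, disposal, years, discount_rate):
    """Net present value of an asset's purchase, O&M, and disposal costs."""
    om_npv = sum(annual_om / (1 + discount_rate) ** t for t in range(1, years + 1))
    disposal_npv = disposal / (1 + discount_rate) ** years
    return purchase + om_npv + disposal_npv

def risk_score(likelihood_of_failure, consequence_of_failure):
    """Simple risk/criticality score: likelihood (0-1) times consequence (1-5 scale)."""
    return likelihood_of_failure * consequence_of_failure

# Two pump alternatives: B costs more up front but less to operate and maintain.
pump_a = life_cycle_cost(purchase=50_000, annual_om=8_000, disposal=2_000,
                         years=20, discount_rate=0.05)
pump_b = life_cycle_cost(purchase=70_000, annual_om=5_000, disposal=2_000,
                         years=20, discount_rate=0.05)
print(f"Pump A life-cycle cost: ${pump_a:,.0f}")  # about $150,000; cheaper purchase, higher total cost
print(f"Pump B life-cycle cost: ${pump_b:,.0f}")  # about $133,000; dearer purchase, lower total cost

# Rank two pipeline segments by risk to target inspection and renewal resources.
print(risk_score(likelihood_of_failure=0.6, consequence_of_failure=5))  # 3.0 -> higher priority
print(risk_score(likelihood_of_failure=0.2, consequence_of_failure=3))  # 0.6 -> lower priority
```

In practice, a utility would apply calculations of this kind across its asset inventory and feed the results into its operating budget and capital improvement plan, consistent with the integrated approach described above.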
Comprehensive asset management is a relatively new concept for drinking water and wastewater utilities in the United States. According to EPA and major water industry organizations, few utilities are implementing comprehensive asset management, and those that have done so are almost exclusively larger entities. In addition, for the most part, the domestic utilities that have adopted asset management are in the early stages of implementation. Few utilities have been involved in the process for longer than 2 to 3 years. Although relatively new to the U.S. water industry, comprehensive asset management has been used for about 10 years by water utilities in Australia and New Zealand, where the national governments have strongly endorsed the concept. In each case, the driving force behind the use of asset management was legislation that called for water utilities to improve their financial management. In Australia, the law requires utilities to recover the full cost of service, while in New Zealand the law requires utilities to depreciate their assets annually and use cost-benefit analysis, among other things. The national governments of Australia and New Zealand each published guidebooks on asset management, and engineering groups in the two countries jointly developed a comprehensive manual on managing infrastructure assets. Asset management is seen as a means of improving utility infrastructure elsewhere in the world. For example, in the United Kingdom, utilities must develop asset management plans every 5 years that identify the level of investment required to maintain and improve capital assets; annual audits help ensure that planned improvements are made. Similarly, in 2002, the legislature in Ontario, Canada, enacted a law requiring municipalities to develop plans for recovering the full cost of service to ensure that drinking water and wastewater systems are adequately funded.

The Ranking Minority Member, Senate Committee on Environment and Public Works, asked us to examine the use of comprehensive asset management at drinking water and wastewater utilities in the United States. This report examines (1) the potential benefits of asset management for water utilities and the challenges that could hinder its implementation and (2) the role that the federal government might play in encouraging utilities to implement comprehensive asset management. To conduct our work, we reviewed relevant studies, handbooks, training materials, and other documents related to comprehensive asset management and its implementation, particularly for managing the infrastructure at drinking water and wastewater utilities. At the federal level, we obtained information from EPA’s Office of Ground Water and Drinking Water and Office of Wastewater Management, the offices that, along with the states, are responsible for overseeing drinking water and wastewater utilities. We also obtained information on other federal agencies with experience in asset management, predominantly the Federal Highway Administration in the U.S. Department of Transportation, and on financial standards promulgated by the Governmental Accounting Standards Board. For site-specific information, our review included over 50 individual utilities from the United States, Australia, and New Zealand—including 15 U.S. utilities at which we conducted structured interviews.
Other sources of information included the following: state associations, including the Association of State Drinking Water Administrators and the Association of State and Interstate Water Pollution Control Administrators; major industry groups, including the American Public Works Association, American Water Works Association, Association of Metropolitan Sewerage Agencies, Association of Metropolitan Water Agencies, National Association of Water Companies, National Rural Water Association, Water Environment Federation, and Water Services Association of Australia; engineering and consulting firms with experience in helping utilities implement asset management, including Brown and Caldwell; CH2M Hill; Metcalf and Eddy, Inc.; Municipal and Financial Services Group; PA Consulting Group; and Parsons Corporation in the U.S.; GHD Pty. Ltd. in Australia; and Meritec in New Zealand; several state and regional regulatory agencies in Australia and New Zealand; and EPA-funded state and university-based training and technical assistance centers.

To obtain information on the benefits and challenges of asset management, we conducted initial interviews with 46 domestic drinking water and wastewater utilities that knowledgeable government and water industry officials identified as implementing comprehensive asset management. To obtain more detailed information, we conducted structured interviews with officials from 15 of the 46 utilities. We selected the 15 utilities based on two criteria: (1) they reported or anticipated achieving quantitative benefits from asset management or (2) they represented smaller entities. (See app. I for a list of the 15 utilities we selected for structured interviews.) Twelve of the 15 utilities were relatively large, serving populations ranging from 300,000 to 2,500,000; the remaining three were significantly smaller, serving populations ranging from 3,000 to 67,100. Because of the small number of utilities that we interviewed in depth and the way in which they were selected, our results are not generalizable to the larger universe of domestic drinking water and wastewater utilities. Because of the utilities’ limited experience in implementing asset management, we supplemented the information obtained from domestic utilities with information from six utilities and five government agencies in Australia and New Zealand, two countries that have taken the lead in implementing comprehensive asset management. (See app. II for a list of the utilities and government agencies we contacted in Australia and New Zealand.) Outside the water industry, we consulted with the Private Sector Council, which identified two companies—The Gillette Company and SBC Communications, Inc.—with long-standing experience in using comprehensive asset management in their respective fields. We interviewed officials from these companies to obtain their perspectives on the benefits and challenges of implementing asset management. For information on the potential federal role in promoting asset management at water utilities, we obtained information from EPA’s Office of the Chief Financial Officer, Office of Ground Water and Drinking Water, and Office of Wastewater Management on the activities that EPA is currently sponsoring, including the development of informational materials on asset management; activities by EPA-funded, state and university-based training and technical assistance centers; and various studies and research projects.
We also discussed options for a federal role in promoting asset management with officials from water industry associations, EPA, and the 15 utilities selected for structured interviews. In addition, with the help of organizations and officials experienced in asset management, we identified the U.S. Department of Transportation as being at the forefront of federal involvement in this issue. We obtained and reviewed information about the department’s initiatives from the Office of Asset Management within the Federal Highway Administration. We conducted our work between March 2003 and March 2004 in accordance with generally accepted government auditing standards. We provided a draft of this report to EPA for review and comment. We received comments from officials within EPA’s Office of Water and Office of the Chief Financial Officer, who generally agreed with the information presented in the report and our recommendations. They further noted that while EPA has played a major role in bringing asset management practices to the water industry, significant additional activity could be undertaken, and they have placed a high priority on initiating activities similar to those we suggested. The officials also made technical comments, which we incorporated as appropriate. While comprehensive asset management is relatively new to most drinking water and wastewater utilities in the United States, some utilities say they have already benefited from this approach and have also encountered certain challenges. The utilities reported benefiting from (1) improved decision making because they have better information about their capital assets and (2) improved relationships with governing authorities, ratepayers, and other stakeholders because they are better able to communicate information on infrastructure needs and improvement plans. While water industry officials identified benefits associated with comprehensive asset management, we found that reported savings should be viewed with caution. Among the challenges of implementing asset management, utility officials cited the difficulty of (1) collecting the appropriate data and managing it efficiently and (2) making the cultural changes necessary to integrate information and decision making across departments. In addition, the officials reported that the short-term budget and election cycles typical of utility governing bodies make it difficult to meet the long-term capital investment planning needs of asset management. Although smaller utilities face more obstacles to implementing asset management than larger utilities, principally because of limited resources, they can also benefit from applying asset management concepts. U.S. utilities expect to reap significant benefits from the data they collect, analyze, and share through an asset management approach. With these data, utilities expect to make more informed decisions on maintaining, rehabilitating, and replacing their assets, thereby making their operations more efficient. Utilities can also use these data to better communicate with their governing bodies and the public, which should help them to make a sound case when seeking rate increases. Although water industry officials identified financial and other benefits from using asset management, reported savings should be viewed with caution because, for instance, comprehensive asset management may be implemented concurrently with other changes in management practices or operational savings may be offset by increases in capital expenditures. 
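The following short calculation, using purely hypothetical figures, illustrates why an operating-cost reduction by itself does not establish a net benefit: if the reduction is accompanied by higher capital spending, overall costs can still rise.

```python
# Hypothetical figures (in dollars per year) illustrating why operating-cost
# savings alone do not establish a net benefit from asset management.
operating_costs_before  = 10_000_000
operating_costs_after   =  9_200_000   # apparent savings of $800,000
capital_spending_before =  4_000_000
capital_spending_after  =  5_100_000   # e.g., accelerated replacement of old assets

operating_savings = operating_costs_before - operating_costs_after
added_capital     = capital_spending_after - capital_spending_before
net_change        = operating_savings - added_capital

print(f"Operating savings:  ${operating_savings:,}")
print(f"Added capital cost: ${added_capital:,}")
print(f"Net annual change:  ${net_change:,}")   # negative here: total costs rose
```

Because newer assets often require less maintenance, some portion of an apparent operating-cost saving may in fact be attributable to the added capital spending itself, a point discussed further later in this report.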
Collecting, sharing, and analyzing data through comprehensive asset management can help utilities to make more informed decisions about maintaining, rehabilitating, and replacing their assets. In particular, utilities can use the information collected and analyzed to prevent problems and allocate their maintenance resources more effectively. For example:

Better information enabled the Massachusetts Water Resources Authority to improve its maintenance decisions and eliminate some unneeded maintenance activities. For example, in an effort to optimize maintenance practices in one of its treatment plants, utility officials reassessed maintenance practices for 12 equipment systems, such as different types of pumps. By using the assessment results to improve maintenance planning for these assets, the utility decreased the labor hours spent on preventive maintenance by 25 percent from the hours recommended by the original equipment manufacturers, according to utility officials. Similarly, in analyzing its maintenance practices, the Massachusetts Water Resources Authority found it was lubricating some equipment more often than necessary. By decreasing the frequency of oil changes, the utility reported it saved approximately $20,000 in oil purchase and disposal costs. In addition, the utility extended the life of its assets by decreasing the lubrication—over-lubrication can cause equipment parts to fail prematurely.

Seattle Public Utilities used asset management to better target its maintenance resources. As part of the utility’s asset management strategy, officials used a risk management approach, calculating the likelihood and impact of a rupture for the utility’s sewer and drainage pipes. To determine the likelihood of rupture, officials considered such factors as a pipe’s age, material, and proximity to a historical landfill or steep slope. To determine the impact of a rupture, they examined factors such as a pipe’s size, location, and historical cost of repair. As a result of this analysis, utility officials identified 15 percent of their pipes as high risk, or “critical”—such as larger, older pipes located beneath downtown Seattle. They shifted resources to maintain and rehabilitate these pipes. The officials considered the remaining 85 percent of pipes as noncritical, or lower risk, because their failure was less likely or because a breakage would affect a limited number of customers, be repaired relatively quickly, and require minimal resources. For these pipes, the utility decided not to perform any preventive maintenance activities, only making repairs as needed. By taking this approach, utility officials believe they are using their staff resources more efficiently and that, over time, they will reduce their maintenance costs.

Comprehensive asset management also helps managers to make more informed decisions about whether to rehabilitate or replace assets, and once they decide on replacement, to make better capital investment decisions. For example:

According to utility managers at the Louisville Water Company, the utility developed its Pipe Evaluation Model in the early 1990s as a tool for ranking its 3,300 miles of aging pipes and water mains for rehabilitation and replacement. The pipe program includes many of the key principles and practices of comprehensive asset management: for instance, it integrated data about the age of the pipes with data about their maintenance history.
In analyzing this information, managers discovered that two vintages of pipes—those built between 1862 and 1865 and between 1926 and 1931—had the highest number of breaks per 100 miles of pipeline. Consequently, they decided to replace the pipes from those two periods. The model also showed that pipes installed between 1866 and 1925 were fairly reliable; thus, these pipes were targeted for rehabilitation rather than replacement. The utility is lining the interior of these pipes with cement, which is expected to extend their life by about 40 years. Furthermore, utility managers told us that their pipe model and other practices that use asset management principles have helped reduce the frequency of water main breaks from 26 to 22.7 per hundred miles and the frequency of leaks from joints from 8.2 to 5.6 per hundred miles.

In implementing its asset management approach, managers at the Sacramento Regional County Sanitation District reassessed a proposed investment in new wastewater treatment tanks and decided on a less expensive option, thereby saving the utility approximately $12 million. During this reassessment, managers found that increasing preventive maintenance on existing tanks would lower the risk of shutdown more cost-effectively than adding a new set of tanks. Utility officials commented that their implementation of asset management helped change their decision-making process by, among other things, bringing together staff from different departments to ensure more complete information and using the data more effectively to understand investment options.

As a part of its asset management strategy, Seattle Public Utilities established an asset management committee, composed of senior management from various departments, to ensure appropriate decision making about the utility’s capital improvement projects. For every capital improvement project with an expected cost over $250,000, project managers must submit a plan to the committee that (1) defines the problem to be solved, (2) examines project alternatives, (3) estimates the life-cycle costs of the alternatives, (4) analyzes the possible risks associated with the project, and (5) recommends an alternative. According to utility officials, implementing this process has led to deferring, eliminating, or altering several capital improvement projects and has contributed to a reduction of more than 8 percent in the utility’s 2004 capital improvement project budget for water. For instance, after drafting new water pressure standards, the utility eliminated the need for some new water mains. It developed an alternative plan to provide more localized solutions to increase water pressure, resulting in expected savings of $3 million. In another case, the utility reassessed alternatives to replacing a sewer line located on a deteriorating trestle, ultimately opting to restore and maintain the existing wood trestle and make spot repairs to the sewer line, which resulted in an estimated savings of $1.3 million.

Finally, comprehensive asset management helps utilities share information across departments and coordinate planning and decision making. In this way, utility managers can reduce duplication of efforts and improve the allocation of staff time and other resources. For example, managers at Eastern Municipal Water District used asset management to improve their business practices, which they saw as compartmentalized and inefficient. In one instance, they examined their decentralized maintenance activities.
The utility had two maintenance crews that worked throughout the system on different shifts and reported to managers at four different facilities. In addition, the utility’s work order system was inefficient; for example, when different crew members independently reported the same maintenance need, managers did not notice the duplication because the problem was described in different terms (e.g., as a “breaker failure” by one crew member and as a “pump failure” by another). Finally, in some instances, work crews would arrive at a site only to find that needed maintenance work had already been completed. To improve the system, utility officials (1) centralized maintenance by making one person responsible for scrutinizing and setting priorities for all work orders and (2) established a standardized classification of assets, which helped maintenance staff use the same terminology when preparing work orders. Utility officials report that taking these steps allowed them to identify and eliminate work orders that were unnecessary, already completed, or duplicates, which ultimately reduced their maintenance work backlog by 50 percent.

The private sector companies we visited agreed that using a comprehensive asset management approach improved their decision making. Specifically, by improving their data, analyzing these data, and centralizing management decision making, managers at SBC Communications, Inc., reported that they have made better capital investment decisions and allocated resources more efficiently. Managers at The Gillette Company reported that they consider life-cycle costs and other factors to assess investment alternatives and, ultimately, make better investment decisions.

The utilities we contacted reported that comprehensive asset management also benefits their relations with external stakeholders by (1) making a sound case for rate increases to local governing bodies and ratepayers; (2) improving their bond ratings with credit rating agencies; and (3) better demonstrating compliance with federal and state regulations. Some utilities have used, or expect to use, the information collected through comprehensive asset management to persuade elected officials to invest in drinking water and wastewater infrastructure through rate increases. For example, the Louisville Water Company reported that in the early 1990s it used the asset information it had gathered and analyzed to convince its local governing board that its current rates would not cover its expected costs and that the utility needed a rate increase to cover its anticipated rehabilitation and replacement needs. The board approved a set-aside of $600,000 for an infrastructure rehabilitation and replacement fund as a part of the requested rate increase in 1993, and, according to one utility official, has been supportive of including funds for asset rehabilitation and replacement as a part of rate requests since then. Furthermore, the utility manager requested that the amount of the set-aside gradually increase to $3 million over the next 5 years. According to this official, the board not only approved this request, it also increased the rates to support the fund sooner than the utility manager had requested. According to several other utilities that have begun to implement comprehensive asset management, this approach should enable them to justify needed rate increases to their governing bodies.
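The following sketch suggests, in simplified and hypothetical form, how a utility might use its asset data to show a governing board whether revenue at current rates will cover projected costs once rehabilitation and replacement needs are included; the figures and growth rate are illustrative assumptions, not data from Louisville Water Company or any other utility discussed in this report.

```python
# Hypothetical sketch: comparing projected revenue at current rates with
# projected costs that include rehabilitation and replacement needs drawn
# from an asset inventory. All numbers are illustrative assumptions.

YEARS = range(2004, 2009)
current_rate_revenue = 12_000_000       # annual revenue at today's rates
operating_costs      = 9_000_000        # annual operations and maintenance costs
cost_growth          = 0.03             # assumed 3 percent annual cost growth
renewal_needs = {                       # annual rehabilitation/replacement needs
    2004: 1_500_000, 2005: 2_000_000, 2006: 2_400_000,
    2007: 2_900_000, 2008: 3_500_000,
}

print(f"{'Year':<6}{'Revenue':>12}{'Costs':>12}{'Surplus/(gap)':>16}")
for i, year in enumerate(YEARS):
    costs = operating_costs * (1 + cost_growth) ** i + renewal_needs[year]
    gap = current_rate_revenue - costs
    print(f"{year:<6}{current_rate_revenue:>12,.0f}{costs:>12,.0f}{gap:>16,.0f}")
```

A projection of this kind, backed by the condition and remaining-life information in an asset inventory, is the type of evidence utilities described using, or planning to use, when requesting rate increases.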
Similarly, Australian and New Zealand officials we interviewed stated that the data from asset management help utilities make a more credible case for rate increases to their governing bodies. Utility managers can also use the information they provide to their governing boards as a basis for evaluating and deciding on trade-offs between service levels and rates. For example, according to an official at South Australian Water Corporation, using asset management practices, he was able to suggest a range of funding alternatives to the utility’s governing body. The utility managers conducted statistical modeling on the asset information they collected (e.g., pipe performance history and financial information) and, using this analysis, predicted the approximate number of pipe breaks at various levels of funding. Understanding the trade-offs between lower rates and higher numbers of pipe breaks, the governing body could make an informed decision about the appropriate level of service for its community.

Comprehensive asset management also has the potential to improve a utility’s bond rating, a benefit that translates into savings through lower interest rates on loans and bonds. When deciding on a utility’s bond rating, credit rating agencies consider criteria related to comprehensive asset management, such as the utility’s management strategies and its planning for asset replacement. For example, according to a representative from one credit rating agency, asset management shows that a utility is considering future costs. He would therefore expect a utility with an asset management plan that looks at future capital and operating costs and revenues to receive a higher bond rating than a utility that does not sufficiently consider those future needs, even if that utility has a better economy and a higher tax base. Some local officials believe that comprehensive asset management played a role in the bond ratings they received, or will do so in the future. For example, the finance director of the small northeastern city of Saco, Maine, told us that she believes that the city’s decision to use asset management practices—such as maintaining an up-to-date asset inventory, periodically assessing the condition of the assets, and estimating the funds necessary to maintain the assets at an acceptable level each year—contributed to the credit rating agencies’ decision to increase the city’s bond rating, which resulted in an expected savings of $2 million over a 20-year period. Similarly, a utility official at Louisville Water Company told us that asset management practices, such as strategically planning for the rehabilitation and replacement of its aging assets, help the utility maintain its strong bond rating.

According to several utility managers we interviewed, comprehensive asset management can also be used to help comply with regulations. For example, officials told us that asset management practices played a role in improving their utilities’ compliance with existing regulations; among other things, practices such as identifying and maintaining key assets led to fewer violations of pollutant discharge limitations under the Clean Water Act. At Western Carolina Regional Sewer Authority, for instance, the number of these violations decreased from 327 in 1998 (about the time that the utility began implementing asset management) to 32 violations in 2003.
At the Charleston Commissioners of Public Works, utility officials told us that if they had not had asset management in place it would be difficult to meet the rehabilitation program and maintenance program elements of EPA’s draft capacity, management, operation, and maintenance regulations for wastewater utilities. For instance, the draft regulations would require that wastewater utilities identify and implement rehabilitation actions to address structural deficiencies. Because the utility has implemented asset management practices, such as assessing the condition of its pipes and identifying those most in need of rehabilitation, it can better target its resources to rehabilitate pipes in the worst condition, and, in the process, meet the proposed standards for rehabilitation. Many of the U.S. utilities we interviewed were still in the early stages of implementing asset management and most had not measured financial savings. However, many water industry officials expect asset management to result in overall cost savings. Specifically, several officials told us they expect that asset management will slow the rate of growth of utilities’ capital, operations, and maintenance costs over the coming years. Nevertheless, total costs will rise because of the need to replace and rehabilitate aging infrastructure. At least one U.S. utility has estimated the overall savings it will achieve using comprehensive asset management. Specifically, an engineering firm projected that asset management would reduce life-cycle costs for the Orange County Sanitation District by about $350 million over a 25-year period. Among other data, the engineering firm used the utility’s available operating expenditure information (operations, maintenance, administration, and depreciation data) and capital improvement program expenditures (growth/capacity, renewal/replacement, and level of support data) to model the projected life-cycle cost savings. Additionally, some of the Australian utilities we interviewed reported financial savings. For example, officials at Hunter Water Corporation reported significant savings in real terms between fiscal years 1990 and 2001: a 37 percent reduction in operating costs; improved service standards for customers, as measured by such factors as water quality and the number of sewer overflows; and a reduction of more than 30 percent in water rates for customers. Hunter Water officials believe that they achieved these efficiencies as a result of asset management. Though utility officials have made some attempts to quantify the impact of asset management, they also cited reasons for exercising caution in interpreting reported savings and other benefits. First, benefits such as operating cost reductions should not be considered in isolation of other utility costs. A utility cannot consider reductions in operating costs a net benefit if, for instance, savings in operational costs are offset by an increase in the utility’s capital expenditures. Furthermore, reductions in operating costs may be caused by increases in capital expenditures because, for example, newer assets may require less maintenance and fewer repairs. In the case of the Hunter Water Corporation, the utility’s capital expenditures were at about the same level in 2001 as in 1991, despite some fluctuation over the period. Second, other factors might have contributed to financial and other benefits. 
For example, a utility may be implementing other management initiatives concurrently with asset management and may not be able to distinguish the benefits of the various initiatives. In addition to using an asset management approach, for instance, some U.S. utilities we interviewed used an environmental management system, which shares some of the same components as asset management. Some of these utilities told us that they could not separate the benefits of asset management from those achieved as a result of their environmental management systems. In addition, reported savings from asset management can be misleading without complete information on how the savings estimates are derived. For example, a widely distributed graph shows an estimated 15 percent to 40 percent savings in life-cycle costs for 15 wastewater utilities in Australia. EPA and others used the graph as a basis for projecting savings for U.S. utilities. However, the graph was mislabeled at some point—the reported reductions in life-cycle costs were actually reductions in operating costs. As we have already noted, operating costs reductions alone do not provide enough information to determine the net benefit of implementing asset management. Despite the acknowledged benefits of comprehensive asset management, utilities face three key challenges that may make implementing this approach difficult. First, to determine the condition of current assets and the need for future investment, utilities have to gather and integrate complete and accurate data, which may require significant resources. Second, successful implementation requires cultural change—departments long accustomed to working independently must be willing to coordinate and share information. Finally, utilities may find that their efforts to focus on long-term planning conflict with the short-term priorities of their governing bodies. These three challenges may be more difficult for smaller utilities because they have fewer financial, staff, and technical resources. The difficulties utilities experience gathering data to implement asset management depend on the (1) condition of their existing data, (2) ability to coordinate existing data across departments, (3) need to upgrade technology, and (4) ability to sustain complete and accurate data. One industry official noted that larger utilities, in particular, may have a more difficult time gathering and coordinating data because they typically possess a substantial number of assets. Nevertheless, utility officials and water association representatives agree that utilities should not allow these data challenges to prevent them from implementing asset management. These officials emphasized that utilities should begin implementing asset management by using the data they already possess, continuing data collection as they perform their routine repair and maintenance activities, or focusing data collection efforts on their most critical assets. Domestic and international water officials emphasize the importance of obtaining, integrating, and sustaining good data for decision making. This is no small challenge. 
According to the Association of Metropolitan Sewerage Agencies and the International Infrastructure Management Manual, utilities generally need the following types of data to begin implementing asset management: age, condition, and location of the assets; asset size and/or capacity; valuation data (e.g., original and replacement cost); installation date and expected service life; maintenance and performance history; and construction materials and recommended maintenance practices. According to utility officials and industry handbooks, utilities sometimes have incomplete or inaccurate historical data about their assets. For example: An official at the Augusta County Service Authority noted that the utility did not possess a great deal of detailed historical data about its assets. For example, its asset ledger would indicate that “a pump station was installed at a particular location in 1967,” but would not provide any additional information about the assets, such as the individual components that make up this system. Similarly, the official told us that the utility’s prior billing system did not maintain historical data about its customers’ water usage rates. As a result, the management team found it difficult to adequately forecast their needed rate increases because they lacked historical information about water consumption. According to an East Bay Municipal Utility District official, the utility lacked detailed maintenance data on its assets before 1990 because maintenance workers had not consistently reported repairs to a central office. Given these problems, utility managers may have to invest a significant amount of time and resources to gather necessary data, particularly data about the condition of their thousands of miles of buried pipelines. Understandably, utilities are unwilling to dig up their pipelines to gather missing data. However, utilities may be able to derive some information about the condition of these pipes to the extent they have information on the pipes’ age, construction material, and maintenance history. In addition, utilities may choose to align their data collection with their ongoing maintenance and replacement activities. These approaches, however, may require new technology, which may mean a financial investment. For example: Tacoma Water equipped its staff with laptop computers, which allows them to access their geographic information system—software that can track where assets are located—while they are in the field. As the staff perform their routine repair and rehabilitation activities, they can record and update data about an asset’s condition, performance, and maintenance history. Similarly, the Department of Public Works in Billerica, Massachusetts, provided its field staff with handheld electronic devices programmed with a simple data collection template, which allows its staff to more accurately record information about its assets and their condition. Consequently, the field staff can enter more accurate information about the utility’s assets into its central asset inventory. Utilities also reported difficulty collecting and applying information about the manufacturer’s recommended techniques for optimizing their maintenance practices for their assets. Since no central clearinghouse of information on optimal maintenance practices is readily available, these utilities have had to invest their own time and resources to develop this information. 
For example: According to an official at Des Moines Water Works, the utility discovered that the manufacturer’s recommended maintenance practices often conflicted with the utility’s experience with the same asset. This official pointed out that the manufacturer’s estimate for maintenance was always higher than the utility’s experience. Given these inconsistencies, the official noted, all utilities would benefit from the development of a central industry clearinghouse that provided information about the recommended maintenance practices for certain assets. Similarly, an official at East Bay Municipal Utility District noted a significant difference between the manufacturer’s recommended maintenance practices and the utility’s experience with optimized maintenance. As a result, the utility has invested a significant amount of time in developing optimal maintenance practices for its assets and minimizing the risk of asset failure. While utilities need complete and accurate data for decision making, they also need to balance data collection with data management. Utilities may fall prey to data overload—collecting more data than they have the capacity to manage. For example, according to an official at the Augusta County Service Authority, while the utility has collected extensive infrastructure data, it has not invested enough of its resources into making these data useful for decision making. This official told us that utilities need to develop a data management strategy that identifies the types of data they need and the uses of these data for decision making. Without such a strategy, utilities gathering data will reach a point of diminishing returns. According to an official at the National Asset Management Steering Group in New Zealand, utilities should begin to implement asset management by identifying their critical assets and targeting their data- gathering activities toward the critical information they need in order to make decisions about these assets. An official also recommended that utilities begin implementation by using their existing data—even though the data may not be completely accurate—and refine this information as they improve and standardize their data collection processes. According to utility officials, coordinating data can be difficult because the data come from several different departments and from different sources within the departments. Furthermore, one industry handbook notes that a utility’s departments typically maintain different types of data about the same assets, which are formatted and categorized to meet each department’s individual needs and objectives. For example, the finance department may record an asset’s size in terms of square footage, while the engineering department may define an asset’s size in terms of pipeline diameter. Utilities adopting asset management need to coordinate these data to develop a central asset inventory. Table 2 shows the typical sources of data for a central inventory. Utility managers told us it was challenging to develop a standard data format for their central asset inventories. For example: As previously noted, Eastern Municipal Water District’s work order system was inefficient because crew members from different facilities did not use the same terms in describing maintenance problems. To eliminate these inefficiencies, the utility invested a great deal of time and resources to standardize its terms and asset classification and implement a computerized maintenance management system. 
According to a Louisville Water Company official, improving and validating the utility’s data was a challenge. Over the years, the utility has acquired between 12 and 20 smaller utilities. Each of these smaller utilities maintained its own asset data, which were not always reliable or maintained in the same format. The utility invested a great deal of time to validate these data and coordinate them into its central asset inventory. Similarly, according to an official at the South Australian Water Corporation, developing a central asset inventory was particularly difficult because each of the utility’s departments used different terms to refer to the same asset. The utility refined its data collection practices by training its employees on how to record data in a standard format. The utility officials we spoke to also had to address problems in coordinating data maintained in different and incompatible software programs. A Water Environment Research Foundation survey of utility managers, regulators, and industry consultants cited developing an asset information management system that meets the needs of all users as the most difficult element of asset management to implement. Without an integrated information management system, utilities found it difficult to develop data for decision making, and they found that they had to invest time and money to enter these data into a central database. For example: According to a Greater Cincinnati Water Works official, the utility wanted to integrate information about its assets’ location and maintenance history to efficiently dispatch staff to repair sites. However, the data for this report were stored in two separate and incompatible computer systems. To produce this information, the utility needed to re-enter the relevant data from each of these systems into a central asset database. Similarly, an official at Melbourne Water Corporation said that as his utility began to adopt asset management, it realized that it maintained relevant data in different computer systems, such as its computerized maintenance management system and its geographic information system. To address this fragmentation, the utility had to assign staff to consolidating its data into a central database to allow for easy integration. As utilities coordinate their data systems, they may need to upgrade their existing technology, which can represent a significant financial investment. For example, Augusta County Service Authority has requested $100,000 to purchase data integration software, which would allow it to coordinate information from several different computer systems. However, as of September 2003, this request had not been approved, in part because the software may not directly affect the utility’s profits or improve its service, making the governing body reluctant to finance the purchase. Similarly, St. Paul Regional Water Services recognized that it would need to purchase a geographic information system as the basis for integrating all departments’ data. However, the official noted that the utility could not purchase this system for another 4 years because it would cost several million dollars to purchase the system, enter data, and train its staff to operate the new system. As utilities continue to obtain and integrate data, they still face the challenge of maintaining complete and accurate data about their assets. 
The International Infrastructure Management Manual notes that data collection is a continuous process and that utilities need to remain consistent in gathering data and updating their central asset inventory as they repair, replace, or add infrastructure. Regular updating ensures that the information remains useful over time. To sustain the benefits garnered from its efforts to compile an accurate inventory, the Eastern Municipal Water District adopted a policy whereby employees must document changes to the inventory whenever assets are added, repaired, or removed. The utility has also developed methods to enforce its policy to make sure that the inventory is updated as required. According to industry officials, one of the major challenges to implementing asset management is changing the way utilities typically operate—in separate departments that do not regularly exchange information. It is essential to change this management culture, these officials believe, to encourage interdepartmental coordination and information sharing. To encourage interdepartmental communication, utilities may have to train their employees in using the resources of other departments. For example, at the Orange County Sanitation District, the management team found it difficult to demonstrate to its employees that their job responsibilities do indeed affect the functions of the other departments. The utility’s field staff possesses extensive information about the condition and performance of assets because they maintain these assets every day. However, these employees did not understand that the engineering department needs feedback on how the assets that the engineering department constructed are performing in the field. Such feedback could change future designs for these assets to improve their performance. As the utility implemented asset management, it established a work group to examine the conditions of asset failure, which provided a forum for the maintenance and engineering departments to collaborate. While this work group is still ongoing, one utility official noted that collaboration between these two departments will result in more efficient maintenance schedules for the utility’s assets. Similarly, the Eastern Municipal Water District reported that its middle- management team resisted some of the asset management changes because they believed these changes would limit their authority to manage their staff and workload. Before asset management, the utility maintained four different treatment facilities, each with its own maintenance staff. The utility believed that it could optimize its maintenance resources by combining all of the maintenance activities and staff at the four plants under one department. However, the managers at these treatment plants were reluctant to relinquish managerial control over their maintenance staff and feared that their equipment would be neglected. Once the new maintenance department was formed, however, these plant managers realized that centralizing these functions resulted in faster maintenance because the larger team could more effectively allocate time among the four facilities. In some instances, utility employees may be reluctant to accept comprehensive asset management because it requires them to take on additional responsibilities when they are already pressed for time in their “day jobs.” Additional time may indeed be necessary. 
According to officials at different utilities we visited, asset management requires staff throughout the organization to attend a variety of training programs— introductory, refresher, and targeted training by function or job—to ensure that they understand the value of asset management to both their own jobs and the operation of the utility. While asset management provides utilities with information to justify needed rate increases, their justifications may not be effective because their governing body and their customers want to keep rates low. According to utility officials, governing bodies’ reluctance to increase rates may be linked to constituent pressure to hold down user rates. In 2002, we reported that 29 percent of drinking water and 41 percent of wastewater utilities serving populations over 10,000 did not cover their full cost of service through user rates in their most recent fiscal year. Furthermore, about half of these utilities did not regularly increase their user rates; rather, they raised their user rates infrequently—once, twice, or not at all— from 1992 to 2001. Utility officials and water industry organizations also note that utilities may have to respond to governing bodies’ interests rather than to the long-term plan they developed using comprehensive asset management. For instance, while the Orange County Sanitation District’s governing board has supported comprehensive asset management, it overrode utility plans for some capital projects and instead funded a $500 million secondary sewage treatment plant, which was not a utility priority. The board took this action in response to public concerns that the operating sewage plant was inadequate and had contaminated the water. A subsequent report showed, however, that the contamination more than likely did not result from an inadequate treatment plant. However, the utility will probably have to defer other priorities in order to design and build this new facility. In addition, the governing body may shift funding originally budgeted to implement the next phase of Orange County’s asset management program to fund the new plant. Several industry officials also pointed out that governing bodies for municipally owned utilities tend to make financial decisions about their drinking water and wastewater utilities in light of competing local needs that may be a higher priority for the electorate. One industry official also reported that locally elected officials tend to focus their efforts on short- term, more visible projects, while utility managers must focus on sustaining the utility’s operation in the long term. For example, a utility’s governing body may decide to forgo infrastructure repairs in order to build a new school or baseball field. Smaller utilities can also benefit from the improved data, coordination, and informed decision making that result from asset management. Although small utilities represent a substantial portion of the water and wastewater industry, officials recognize that these utilities may have more difficulty implementing asset management because they typically have fewer financial, technological, and staff resources. In addition, EPA has reported that small systems are less likely to cover their full cost of providing services because they have to spread their fixed infrastructure costs over a smaller customer base. 
However, EPA believes that comprehensive asset management will enable smaller systems to increase knowledge of their system, make more informed financial decisions, reduce emergency repairs, and set better priorities for rehabilitation and replacement. Even the most rudimentary aspects of asset management can produce immediate benefits for small communities. For example, the Somersworth, New Hampshire, Department of Public Works and Utilities avoided a ruptured sewer main because, through its asset management initiative, it had collected data mapping the location of critical pipelines. As a result, when a resident applied for a construction permit to build a garage, the utility determined that one critical pipeline lay in the path of the proposed construction and could rupture. Therefore, the city of Somersworth denied the permit. Similarly, the Department of Public Works in Denton, Maryland, which provides both drinking water and wastewater services, obtained positive results from applying asset management concepts without having to invest in sophisticated software or perform a complicated analysis. In this case, Denton’s city council was apprehensive about investing in new trucks for the utility even though some of the existing trucks were in poor condition. Council members believed that it would be less expensive to continue repairing the existing fleet. However, using data collected through their asset management initiative, utility managers were able to track the maintenance and depreciation costs associated with these vehicles. As a result, they could demonstrate to their governing body that it was more cost-effective to purchase new vehicles than to continue repairing the older trucks. Because smaller utilities have fewer capital assets to manage, industry officials noted that these utilities can implement asset management by turning to low-cost alternatives that do not require expensive or sophisticated technology. These small utilities can implement asset management by using their existing asset data and recording this information in a central location that can be accessed by all of their employees, such as a set of index cards or an Excel spreadsheet; a simplified, hypothetical example of such an inventory is shown below. Similarly, these utilities can adopt the practices of asset management incrementally, initially making asset decisions based on their existing data.

Opportunities exist for EPA to encourage water utilities’ use of asset management by strengthening existing initiatives. Currently, EPA sponsors several initiatives to promote the use of asset management, such as training and informational materials, technical assistance, and research. While this is a good first step, the entities involved in these initiatives are not systematically sharing information within and across the drinking water and wastewater programs. With better coordination, however, EPA could leverage limited resources and reduce the potential for duplication within the agency. EPA could supplement its own efforts to disseminate information on asset management by taking advantage of similar efforts by other federal agencies, such as the Department of Transportation. Water industry officials also see a role for EPA in educating utility managers about how asset management can be a tool to help them meet regulatory requirements related to utility management. However, the officials raised concerns about the implications of mandating asset management as proposed in legislation being considered by the Congress.
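The sketch below illustrates the kind of simple, centrally kept inventory described above for small utilities, along with the basic repair-versus-replace comparison that the Denton example reflects; the records, fields, and cost figures are hypothetical, and the same information could just as easily be kept in a spreadsheet.

```python
# Hypothetical sketch of a minimal asset inventory for a small utility,
# using the kinds of fields industry guidance recommends (age, condition,
# expected life, maintenance cost history). The loop at the end shows the
# simple repair-versus-replace comparison a utility might make.

inventory = [
    {"asset": "Service truck No. 2", "installed": 1991, "condition": "poor",
     "annual_repair_cost": 6_500, "replacement_cost": 32_000,
     "expected_remaining_life_if_replaced": 10},
    {"asset": "Well pump, Station A", "installed": 1998, "condition": "fair",
     "annual_repair_cost": 1_200, "replacement_cost": 18_000,
     "expected_remaining_life_if_replaced": 15},
]

for item in inventory:
    # Rough comparison: total repair cost over the replacement's service life
    # versus the one-time cost of replacing the asset now.
    life = item["expected_remaining_life_if_replaced"]
    keep_repairing = item["annual_repair_cost"] * life
    replace_now = item["replacement_cost"]
    decision = "replace" if replace_now < keep_repairing else "keep repairing"
    print(f'{item["asset"]}: repair about ${keep_repairing:,} over {life} years, '
          f'replace about ${replace_now:,} -> {decision}')
```

Even at this level of detail, the inventory captures the fields industry guidance emphasizes, such as age, condition, and maintenance cost, and supports a defensible recommendation to a governing body.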
Through partnerships with water industry associations and universities, EPA has supported the development of training and informational materials to help drinking water and wastewater utilities implement asset management. In particular, EPA contributed funding toward the development of a comprehensive industry handbook on asset management, which was published in 2002 under a cooperative agreement with the Association of Metropolitan Sewerage Agencies. The handbook lays out the principles of asset management and describes how utilities can use this approach to improve decision making, reduce costs, and ensure the long- term, high-level performance of their assets. EPA has also sponsored materials specifically directed at small utilities. For small drinking water systems, EPA’s Office of Ground Water and Drinking Water published a handbook in 2003 that describes the basic concepts of asset management and provides information on how to develop an asset management plan. In addition, to help entities such as mobile home parks and homeowners’ associations that own and operate their own water systems, the office is developing a booklet on preparing a simple inventory of the systems’ assets and assessing their condition. EPA’s Office of Wastewater Management is funding the development of a “toolkit” by a university-based training center to help small wastewater utilities implement asset management. The toolkit is currently being field tested and is scheduled for release in 2006. Among other things, it includes self-audit instruments to help utility managers to analyze their systems’ needs, training materials, and a summary of lessons learned in the field. In addition to various informational materials on asset management, EPA has sponsored a number of training and technical assistance programs. For example, the Office of Wastewater Management, along with representatives from a major utility and an engineering firm, developed a 2-day seminar on asset management, which will be held at several locations around the country during fiscal year 2004. For smaller drinking water and wastewater utilities, EPA funds state and university-based centers that provide training and technical assistance to small utilities on a variety of matters, including asset management. Specifically EPA’s Office of the Chief Financial Officer funds nine university-based “environmental finance centers” that assist local communities in seeking financing for environmental facilities, including municipal drinking water and wastewater utilities. In fiscal year 2003, the nine centers shared a total of $2 million in funding from the Office of the Chief Financial Officer; some centers also receive funds from EPA program offices for specific projects. According to an official in EPA’s Office of Ground Water and Drinking Water, at least three of the finance centers have efforts related to asset management planned or underway to benefit drinking water utilities. For example, the centers at Boise State University and the University of Maryland provide on-site and classroom training on establishing an asset inventory; collecting data on the age, useful life, and value of capital assets; recordkeeping; financing; and setting rates high enough to cover the full cost of service. 
Regarding the latter topic, Boise State’s finance center developed a simplified software program, called CAPFinance, which can help smaller systems collect and analyze the data they need in order to set adequate user rates; much of this information can be used to create a rudimentary asset management program. Another eight university-based technical assistance centers receive funding under the Safe Drinking Water Act to help ensure that small drinking water systems have the capacity they need to meet regulatory requirements and provide safe drinking water. In fiscal year 2003, the eight centers shared about $3.6 million in funding from the Office of Ground Water and Drinking Water. According to an official from that office, three of the centers are holding workshops or developing guidance manuals that focus on sustaining the financial viability of small systems in some way; the official believes that much of this material is relevant to implementing asset management. The Office of Wastewater Management funds 46 state and university- based environmental training centers under the Clean Water Act to train wastewater utility officials on financial management, operations and maintenance, and other topics. According to an official with EPA’s wastewater program, one of the 46 centers is developing a series of six training courses to help small wastewater utilities implement some of the basic elements of asset management, such as inventorying system assets and assessing their condition. Once this effort is completed, the center will disseminate the course materials to the remaining 45 centers so that staff from the other centers will be able to teach the asset management courses to operators of small wastewater utilities across the country. EPA has also funded research projects related to asset management. For example, one project—sponsored by EPA, the Water Environment Federation, and the Association of Metropolitan Sewerage Agencies— examined the interrelationship between asset management and other management initiatives, such as environmental management systems, that have received some attention within the water industry. The project found that to varying degrees, the initiatives share a common focus on continuous improvement through self-assessment, benchmarking, and the use of best practices and performance measures. The final report, issued in September 2002, concluded that while the initiatives overlap substantially, they are generally compatible. EPA also contributed $75,000 toward a 2002 report by the Water Environment Research Foundation, which summarized the results of a 2-day workshop held to develop a research agenda for asset management. Workshop participants, who included utility managers, regulators, and industry consultants, identified areas in which they need improved tools and technical approaches, established criteria for evaluating asset management research needs, and identified and set priorities for specific research projects. According to the foundation’s report, the workshop ultimately recommended 11 research projects, 2 of which will get underway in 2004. EPA is contributing $200,000 to one of these projects, which will develop protocols for assessing the condition and performance of infrastructure assets and predictive models for correlating the two. The foundation will fund the second project, which is scheduled to begin in March 2004, and will develop guidance on strategic planning for asset management. 
According to EPA, the second project will also develop a Web-based collection of best practices on asset management; utilities will be able to purchase licenses to gain access to the materials. The remaining research projects identified in the workshop highlight the need for practical tools to help utilities implement the most fundamental aspects of asset management. They include projects to establish methodologies for determining asset value, compiling inventories, and capturing and compiling information on the assets’ attributes; develop methodologies for calculating life-cycle costs for infrastructure; construct predictive models for infrastructure assets that project life-cycle costs and risks; identify best practices for operating and maintaining infrastructure assets by asset category, condition, and performance requirements; and identify best practices for integrating water and wastewater utility databases. In addition, workshop participants recommended a project to assess the feasibility of establishing an Asset Management Standards Board for the drinking water and wastewater industry. EPA could build on its efforts to promote asset management at drinking water and wastewater utilities by better coordinating ongoing and planned initiatives in the agency’s drinking water and wastewater programs. In addition, EPA could leverage the efforts of other federal agencies, such as the Department of Transportation, that have more experience in promoting asset management as well as informational materials and tools that could potentially be useful as EPA and the water industry develop similar materials. While some of EPA’s efforts to promote the use of asset management, such as sponsoring the comprehensive industry handbook, have involved both the drinking water and wastewater communities, it appears that other efforts are occurring with little coordination between the drinking water and wastewater programs or other offices within EPA. For example, the Office of the Chief Financial Officer, the Office of Ground Water and Drinking Water, and the Office of Wastewater Management have funded parallel but separate efforts to develop handbooks, software, or other training materials to help small drinking water and wastewater utilities implement asset management or related activities such as improving financial viability. According to our interviews with EPA officials and representatives of the university-based training and technical assistance centers, no central repository exists for EPA to track what the university-based centers are doing and ensure that they have the information they need to avoid duplication and take advantage of related work done by others. The centers that share information do so primarily within their own network, as in the case of the environmental finance centers, or share information on an ad hoc basis. As a result, the centers are likely to miss some opportunities to exchange information. Similarly, the drinking water and wastewater program offices do not regularly exchange information on what they or their centers are doing to develop informational materials, training, or technical assistance on asset management. EPA officials explained that, to some extent, the organizational framework within which the centers operate contributes to limited information sharing and duplication of effort. As a result, EPA is not maximizing the resources it devotes to encouraging utilities’ use of asset management.
In the case of the environmental finance centers, for example, each one negotiates a work plan with the EPA regional office it serves. Although EPA headquarters also has some influence over what the centers work on, the centers primarily focus on regional priorities and work with the states within the regional office’s jurisdiction. Occasionally, EPA’s drinking water and wastewater program offices fund projects at the environmental finance centers that are independent of their regional work plans. For example, the drinking water program provided some funds to the center at Boise State to develop an evaluation tool that states can use to assess utilities’ qualifications for obtaining financial assistance from state revolving loan funds. For the most part, however, the training and technical assistance centers operate autonomously and do not have a formal mechanism for regularly exchanging information among the different center networks or between the drinking water and wastewater programs. EPA has not taken advantage of the guidance, training, and implementation tools available from other federal agencies, which would help EPA leverage its resources. For the purposes of our review, we focused on the Department of Transportation’s Federal Highway Administration because it has been involved in promoting asset management for about a decade and has been at the forefront of developing useful tools and training materials. In 1999, the Federal Highway Administration established an Office of Asset Management to develop tools and other materials on asset management and encourage state transportation agencies to adopt asset management programs and practices. According to officials within the Office of Asset Management, the basic elements of asset management are the same regardless of the type of entity responsible for managing the assets or the type of assets being managed. Simply put, every organization needs to know the assets it has, their condition, how they are performing, and the costs and benefits of alternatives for managing the assets. Over the years, the Office of Asset Management has published several guidance documents on asset management and its basic elements. While the purpose of the guidance was to assist state transportation agencies, Transportation officials believe that the general principles contained in their publications are universally applicable. The office’s guidance includes, for example, a general primer on the fundamental concepts of asset management; a primer on data integration that lays out the benefits of and tools for integrating data, the steps to follow in linking or combining large data files, potential obstacles to data integration and ways to overcome them, and experiences of agencies that have integrated their data; and a primer on life-cycle cost analysis that provides information on how to apply this methodology for comparing investment alternatives and describes uncertainties regarding when and how to use life-cycle cost analysis and what assumptions should be made during the course of the analysis. Transportation’s Office of Asset Management has also developed a software program to assist states in estimating how different levels of investment in highway maintenance will affect both user costs and the highways’ future condition and performance. 
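To illustrate the life-cycle cost methodology that the primer describes, the following sketch compares two hypothetical investment alternatives on a present-value basis. It is not taken from Transportation’s guidance; the discount rate, cost streams, and analysis period are assumptions chosen only to show the mechanics of the comparison.

# Minimal sketch of a life-cycle cost comparison between two hypothetical
# investment alternatives. The preferred alternative is the one with the
# lower present value of initial, operating, and rehabilitation costs.

def present_value(cash_flows, rate):
    """Discount a list of (year, cost) pairs back to year zero."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

DISCOUNT_RATE = 0.04  # assumed real discount rate
HORIZON = 30          # assumed analysis period, in years

# Alternative A: lower initial cost, higher annual maintenance, mid-life rehabilitation.
alt_a = [(0, 1_000_000)] + [(y, 40_000) for y in range(1, HORIZON + 1)] + [(15, 300_000)]
# Alternative B: higher initial cost, lower annual maintenance, no rehabilitation.
alt_b = [(0, 1_300_000)] + [(y, 15_000) for y in range(1, HORIZON + 1)]

lcc_a = present_value(alt_a, DISCOUNT_RATE)
lcc_b = present_value(alt_b, DISCOUNT_RATE)
print(f"Life-cycle cost of A: ${lcc_a:,.0f}")
print(f"Life-cycle cost of B: ${lcc_b:,.0f}")
print("Lower life-cycle cost:", "A" if lcc_a < lcc_b else "B")

As the primer notes, the choice of discount rate and the assumptions behind each cost stream drive the result, which is why the uncertainties it describes matter in practice.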
In addition, to disseminate information on asset management, the office established a Web site that includes its most recent tools and guidance and links to external Web sites with related asset management information, including a link to an asset management Web site jointly sponsored with the American Association of State Highway and Transportation Officials. As EPA began its efforts to explore the potential of comprehensive asset management to help address utility infrastructure needs, officials from the Office of Water met with staff from Transportation’s Office of Asset Management and obtained a detailed briefing on its asset management program. Although EPA officials expressed concerns about having relatively limited resources to promote asset management, they have so far not pursued a closer relationship with Transportation or other federal agencies with experience in the field. For example, EPA may find opportunities to adapt Transportation’s guidance materials or use other efforts, such as a Web site that brings together asset management information from diverse sources, as a model for its own initiatives. Water industry officials support a greater role for EPA in promoting asset management, both as a tool for better managing infrastructure and for helping drinking water and wastewater utilities meet existing or proposed regulatory requirements. However, they stopped short of endorsing legislative proposals that would require utilities to develop and implement plans for maintaining, rehabilitating, and replacing capital assets, often as a condition of obtaining loans or other financial assistance. To obtain views on the role that EPA might play in encouraging the use of asset management, we talked with officials from water industry associations and the 15 utilities that we selected for structured interviews. With few exceptions, the officials agreed that EPA should be promoting asset management in some way, although opinions varied on what activities would be most appropriate. One of the options that garnered the support of many was a greater leadership role for EPA in promoting the use of asset management. For example, 11 of the 15 utilities indicated that based on their own experience, asset management can help utilities comply with certain regulatory requirements that focus in whole or in part on the adequacy of utility infrastructure and the management practices that affect it. While EPA recognizes the link between asset management and regulatory compliance—and has noted the connection in some agency publications and training—some utility officials believe that EPA should increase its efforts in this regard. As examples of regulatory requirements for which asset management is particularly germane, officials from industry associations and individual utilities cited both the existing “capacity development” requirements under EPA’s drinking water program and regulations for capacity, management, operation, and maintenance under consideration in the wastewater program, as follows: Capacity development requirements for drinking water utilities. To be eligible for full funding under the Safe Drinking Water Act’s State Revolving Fund program, state regulatory agencies are required to have strategies to assist drinking water utilities in acquiring and maintaining the financial, managerial, and technical capacity to consistently provide safe drinking water. 
To assess capacity, states evaluate, among other things, the condition of the utilities’ infrastructure, the adequacy of maintenance and capital improvement programs, and the adequacy of revenues from user rates to cover the full cost of service. Drinking water utilities that are determined to lack capacity are not eligible for financial assistance from the revolving loan fund. Capacity, management, operation, and maintenance requirements for wastewater utilities. As part of its wastewater management program under the Clean Water Act, EPA is considering regulations designed to improve the performance of treatment facilities and protect the nation's collection system infrastructure by enhancing and maintaining system capacity (i.e., peak wastewater flows), reducing equipment and operational failures, and extending the life of sewage treatment equipment. Among other things, wastewater utilities would be required to prepare capacity, management, operation, and maintenance plans for their operations. The regulations would also require utilities to assess the condition of their physical infrastructure and determine which components need to be repaired or replaced. According to industry officials, implementing asset management is consistent with meeting these requirements, and it enhances utilities’ ability to comply with them. For the requirements being considered for wastewater utilities, for example, EPA has concluded that three basic components are a facility inventory, a condition assessment, and asset valuation—all of which are important elements of asset management. Consequently, the officials believe that it makes sense for EPA to place more emphasis on the use of comprehensive asset management. Some water industry officials also told us that EPA should use the relationship between asset management practices and the financial reporting requirements under Governmental Accounting Standards Board Statement 34 as a means of promoting the use of asset management. Under these new requirements, state and local governments are required to report information about public infrastructure assets, including their drinking water and wastewater facilities. Specifically, the governments must either report depreciation of their capital assets or implement an asset management system. Given the infrastructure-related regulatory requirements and utilities’ other concerns about the condition of their assets, it is not surprising that 11 of the 15 utilities we interviewed in depth saw a need for EPA to set up a clearinghouse of information on comprehensive asset management. Several utilities suggested that EPA establish a Web site that would serve as a central repository of such information. This site could provide drinking water and wastewater utilities with direct and easy access to information that would help them better manage their infrastructure. For example, the Web site could gather in one place the guidance manuals, tools, and training materials developed by EPA or funded through research grants and its training and technical assistance centers. The site could also contain links to asset management tools and guidance developed by domestic and international water associations or other federal agencies, such as Transportation’s Office of Asset Management. Several officials also commented that it might be useful to have a site where drinking water and wastewater utilities could share lessons learned from implementing asset management. 
Other utilities also supported the idea of a Web site, but were uncertain about whether EPA was the appropriate place for it. In commenting on a draft of this report, EPA generally agreed that an EPA Web site devoted to asset management would be worthwhile and is considering developing such a site. In recent years, the Congress has considered several legislative proposals that would, in part, promote the use of asset management in some way. These proposals generally call for an inventory of existing capital assets; some type of plan for maintaining, repairing, and replacing the assets; and a plan for funding such activities. All but one of the proposals made having the plans a condition of obtaining federal financial assistance. The proposals are consistent with what we have found to be the leading practices in capital decision making. As we reported in 1998, for example, routinely assessing the condition of assets allows managers to evaluate the capabilities of existing assets, plan for future replacements, and calculate the cost of deferred maintenance. However, according to key stakeholders, implementing and enforcing requirements for asset management could be problematic at this time. We asked water industry groups, associations of state regulators, and individual utilities for their views on the proposed mandate of asset management plans. While most of them endorse asset management, they raised several concerns about a statutory requirement. For example: Officials from water industry associations believe that drinking water and wastewater utilities are already overburdened by existing regulatory requirements and that many utilities lack the resources to meet an additional requirement for developing asset management plans. The Association of State Drinking Water Administrators and the Association of State and Interstate Water Pollution Control Administrators both said that the states lack the resources to oversee compliance and determine the adequacy of asset management plans. Both the state and industry associations questioned the feasibility of defining what would constitute an adequate plan. Officials at 12 of the 15 utilities where we conducted in-depth interviews had serious reservations about a requirement. For example, some utility managers were concerned that EPA and the states would attempt to standardize asset management and limit the flexibility that utilities need to tailor asset management to their own circumstances. Another concern was that the states lack financial and technical resources and thus are ill equipped to determine whether utilities’ asset management plans are adequate. Finally, some utility officials also questioned the burden that such a requirement would place on small utilities. Other utility officials either support a requirement or support the concept of asset management but question whether mandating such a requirement is an appropriate role for the federal government. One of the officials commented that whether or not asset management is required, utilities should manage their infrastructure responsibly and charge rates sufficient to cover the full cost of service. The National Association of Water Companies, which represents investor-owned utilities, supports a requirement for asset management to ensure that public water and wastewater utilities are operating efficiently and are charging rates that cover the full cost of service. 
Comprehensive asset management shows real promise as a tool to help drinking water and wastewater utilities better identify and manage their infrastructure needs. Even with their limited experience to date, water utilities reported that they are already achieving significant benefits from asset management. EPA clearly recognizes the potential of this management tool to help ensure a sustainable water infrastructure and has sponsored a number of initiatives to support the development of informational materials and encourage the use of asset management. However, in an era of limited resources, it is particularly important for EPA to get the most out of its investments by coordinating all of the asset management-related activities sponsored by the agency and taking advantage of tools and training materials developed by others—including domestic and international industry associations and other federal agencies with experience in asset management. Establishing a central repository of all asset management-related activities could not only foster more systematic information sharing but also help minimize the potential for duplication and allow EPA-sponsored training and technical assistance centers to build on each other’s efforts. As EPA has recognized, improving utilities’ ability to manage their infrastructure cannot help but improve their ability to meet regulatory requirements that focus on the adequacy of utility infrastructure and management practices. Consequently, it is in the agency’s best interest to disseminate information on asset management and promote its use. Establishing a Web site, perhaps as part of the repository, would help ensure that such information is accessible to water utilities and that EPA is getting the most use out of the materials whose development it funded. Moreover, EPA could use the site as a means of strengthening its efforts to educate utility managers on the connection between effectively managing capital assets and the ability to comply with relevant requirements under the Safe Drinking Water Act and Clean Water Act. Given the potential of comprehensive asset management to help water utilities better identify and manage their infrastructure needs, the Administrator, EPA, should take steps to strengthen the agency’s existing initiatives on asset management and ensure that relevant information is accessible to those who need it. Specifically, the Administrator should better coordinate ongoing and planned initiatives to promote comprehensive asset management within and across the drinking water and wastewater programs to leverage limited resources and reduce the potential for duplication; explore opportunities to take advantage of asset management tools and informational materials developed by other federal agencies; strengthen efforts to educate utilities on how implementing asset management can help them comply with certain regulatory requirements that focus in whole or in part on the adequacy of utility infrastructure and the management practices that affect it; and establish a Web site to provide a central repository of information on comprehensive asset management so that drinking water and wastewater utilities have direct and easy access to information that will help them better manage their infrastructure.
Having invested billions of dollars in drinking water and wastewater infrastructure, the federal government has a major interest in protecting its investment and in ensuring that future assistance goes to utilities that are built and managed to meet key regulatory requirements. The Congress has been considering, among other things, requiring utilities to develop comprehensive asset management plans. Some utilities are already implementing asset management voluntarily. The asset management approach minimizes the total cost of buying, operating, maintaining, replacing, and disposing of capital assets during their life cycles, while achieving service goals. This report discusses (1) the benefits and challenges for water utilities in implementing comprehensive asset management and (2) the federal government's potential role in encouraging utilities to use it. Drinking water and wastewater utilities that GAO reviewed reported benefiting from comprehensive asset management but also finding certain challenges. The benefits include (1) improved decision making about their capital assets and (2) more productive relationships with governing authorities, rate payers, and others. For example, utilities reported that collecting accurate data about their assets provides a better understanding of their maintenance, rehabilitation, and replacement needs and thus helps utility managers make better investment decisions. Among the challenges to implementing asset management, utilities cited collecting and managing needed data and making the cultural changes necessary to integrate information and decision making across departments. Utilities also reported that the shorter-term focus of their governing bodies can hamper long-term planning efforts. EPA currently sponsors initiatives to promote the use of asset management, including educational materials, technical assistance, and research. While this is a good first step, GAO found that EPA could better coordinate some activities. For example, EPA has no central repository to facilitate information sharing within and across its drinking water and wastewater programs, which would help avoid duplication of effort. Water industry officials see a role for EPA in promoting asset management as a tool to help utilities meet infrastructure-related regulatory requirements; they also noted that establishing an EPA Web site would be useful for disseminating asset management information to utilities. The officials raised concerns, however, about the implications of mandating asset management, citing challenges in defining an adequate asset management plan and in the ability of states to oversee and enforce compliance.
Since its opening in 1976, the Smithsonian Institution’s National Air and Space Museum (NASM), located on the Mall in Washington, D.C., has attracted an average of nine million visitors per year. It received the most visitors in 1984, with 14.4 million. The museum had 8.2 million visitors in 1993 and 8.5 million visitors in 1994. Since 1976, 106 aircraft have been on display at the Mall museum. Of NASM’s 344 aircraft, 62 are currently on display at the Mall museum and seen by millions of visitors to the museum. Of the remaining 282 aircraft, 210 are stored at the Paul E. Garber Preservation, Restoration, and Storage Facility, Suitland, MD; 58 are on loan to and exhibited by other museums; 12 are stored at Dulles International Airport, VA; 1 is stored at the Department of Defense’s (DOD) Aircraft Maintenance and Regeneration Center (AMARC), Tucson, AZ; and 1 is at Andrews Air Force Base, MD. NASM estimates that 245 of the 344 aircraft are exhibitable, 55 need minor work to become exhibitable, and 44 need major restoration work. Since the early 1980s, NASM has planned to build an extension facility at Dulles International Airport to replace the Garber facility and to display large aircraft that cannot be shown at the Mall museum. The Smithsonian’s earliest acquisition of aviation artifacts was made in 1876, when it received kites from the Imperial Chinese government to celebrate the American Centennial. In 1905, the Smithsonian acquired its first flying machine, Langley Aerodrome No. 5, a model aircraft that made the first successful flight of any unmanned, engine-driven aircraft. Several other aircraft were added to the Smithsonian’s collection before World War I, including the 1909 Wright Military Flyer, the world’s first military airplane. In the decade after World War I, the Smithsonian acquired several World War I aircraft. Paul E. Garber, a Smithsonian employee with an interest in airplanes who joined the Smithsonian in 1920, arranged for the Smithsonian to acquire the “Spirit of St. Louis” in 1928, a year after Charles Lindbergh’s historic solo flight across the Atlantic Ocean. During the 1930s, the Smithsonian added many historic aircraft to its collection, which was housed in a small metal building behind the Smithsonian’s Arts and Industries Building in Washington, D.C. In 1946, Congress passed legislation directing that the Smithsonian establish a separate air museum, which became the National Air Museum. Mr. Garber also obtained many of the Smithsonian’s World War II aircraft from a collection assembled by Army Air Force General Hap Arnold, who believed that it was in the national interest to obtain one example of each type of World War II aircraft, including captured enemy aircraft. After World War II, most of these aircraft were stored in an automobile factory building in Park Ridge, IL, until the government sought to reactivate the factory for the Korean War in 1950. That collection subsequently was divided between the Smithsonian and the Air Force. The Smithsonian’s then newly organized National Air Museum acquired its share of the collection, which was moved to a 21-acre tract of federally owned land, located by Mr. Garber, in Suitland, MD, about 7 miles from Washington, D.C. The aircraft were mainly stored outside at Suitland from the early 1950s until they were moved into temporary storage buildings that were constructed primarily in the 1950s, 1960s, and early 1970s.
Under its authorizing legislation, NASM is to “memorialize the national development of aviation and space flight; collect, preserve, and display aeronautical and space flight equipment of historical interest and significance; serve as a repository for scientific equipment and data pertaining to the development of aviation and space flight; and provide educational material for the historical study of aviation and space flight.” According to the legislative history, the museum was designed to display to the public notable exhibits comprising the nation’s air and space collection, including historic and scientific “firsts” such as the original Wright Brothers flyer, the first to fly at Kitty Hawk in 1903; Charles Lindbergh’s “Spirit of St. Louis,” the first solo across the Atlantic Ocean in 1927; the first Earth satellites; and Alan Shepard’s Freedom 7 and John Glenn’s Friendship 7, the first manned spacecraft in the Mercury program. Testifying before Congress in 1964, the Director of the National Air Museum said that the museum “. . . cannot accept one of each airplane and launch vehicle. We accept only those of great historical significance.” The congressional report accompanying the authorizing legislation emphasized that NASM would make possible for the first time a comprehensive presentation to the public of the notable exhibits comprising the nation’s air and space collections. NASM’s collection traces the country’s development of flight from its earliest days to the most recent space ventures. Although NASM’s collection contains some military aircraft, the museum’s focus is not military aviation. By contrast, other federally funded aviation museums, such as the Air Force Museum in Dayton, OH, and the Naval Aviation Museum in Pensacola, FL, primarily display military aircraft. Further, NASM does not display replicas but restores aircraft to original, although not flyable, condition. NASM’s restorations involve using original parts or locating similar parts, or constructing the parts if they cannot be found. Some officials in other air and space museums told us that NASM has very high restoration standards—standards that they said their museums generally could not afford to meet. The Director of the Air Force Museum, for example, said that NASM “wrote the book on aircraft restoration,” and that NASM’s restoration process is “excruciatingly thorough and detailed.” The Air Force Museum Director also said that the museum cannot afford to follow NASM’s restoration standards and must make compromises in seeking originality. The Director of the Champlin Fighter Museum in Mesa, AZ, told us that NASM’s restoration process is more tedious than Champlin’s. An official from the Pima Air and Space Museum in Tucson, AZ, told us that his museum tries to put acquired aircraft on display as soon as possible, limiting restoration work to cosmetic changes. The former director said that NASM’s high standards are intentional and are designed to allow researchers in the future to study the materials and technology originally used to construct aircraft. NASM operates on both federal funds, used primarily for employee salaries, and private donations, which largely fund exhibits. In fiscal year 1994, NASM received about $15.4 million in federal appropriations, grants, and contracts for salaries, travel, research, and supplies. It also received $10.6 million in nongovernmental funds, such as private donations and theater and gift shop revenues. Table 1.1 details the sources of NASM funding for fiscal year 1994.
Of the $26 million in funds received in fiscal year 1994, NASM spent about $20 million, as of September 30, 1994. About $6 million of the funds received included revenue from endowments, grants, and gifts that will not be paid out until later years. Table 1.2 shows fiscal year 1994 NASM expenditures in the categories that the museum maintains for its budget data. In December 1994, Senator Kay Bailey Hutchison was contacted by an historic aircraft organization, which said that NASM was not properly managed and in particular was not restoring a sufficient number of aircraft, thereby allowing its collection to deteriorate. We were asked to assess the rate of aircraft restoration; examine the adequacy of facilities for preserving aircraft; and if preservation problems exist, identify options to better care for the aircraft collection. To obtain information about the formation of the Smithsonian’s aircraft collection, we reviewed the legislative history of NASM and historical materials written about the museum and its collection. We also obtained and analyzed NASM data regarding the current number of aircraft in its collection, their condition and location, and the costs of aircraft restoration and storage. We inspected NASM’s Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland, MD, NASM aircraft stored at Dulles International Airport, VA, and DOD’s AMARC in Tucson, AZ. To obtain information about NASM’s plans to collect additional aircraft, we reviewed the museum’s collections rationales for aircraft and space objects. At NASM, we interviewed staff involved in aircraft restoration and preservation, including the Assistant Director for Collections Management, the conservator, 10 restorers, 3 volunteers, and 4 other collections management staff; 5 curators; the former senior curator; the Senior Advisor to the Director; and the former director. We selected the NASM employees we interviewed based on their involvement with managing the collection. Some of the individuals we interviewed contacted us on their own initiative to provide information. To compare NASM’s restoration and preservation practices with those of other federally funded museums, we visited and interviewed officials from the Air Force Museum in Dayton, OH, and the Smithsonian Institution’s National Museum of American History in Washington, D.C., and obtained data from these museums about their restoration staffing levels. We also visited and interviewed officials from the Champlin Fighter Museum in Mesa, AZ, about its restoration of an airplane for NASM under contract. To compare restoration and preservation practices at a nonprofit museum that receives no government funds, we visited and interviewed an official from the Pima Air and Space Museum in Tucson, AZ. We also interviewed other individuals knowledgeable about NASM’s restoration and preservation policies and practices, including the Director of the San Diego Aerospace Museum in San Diego, CA, and the Air Force Historian. We also reviewed reports from 1988 to 1994 of the Research and Collections Management Advisory Committee, an advisory group consisting of academic and museum professionals that was formed by the most recent NASM Director to provide senior management with outside reviews of museum programs. In addition, we interviewed the Advisory Committee Chairman and one of the committee members.
We obtained and reviewed materials relating to NASM’s planned extension at Dulles Airport, including space requirements, financing, and future aircraft acquisitions plans. We also reviewed the legislative history regarding the extension and interviewed NASM officials involved in planning the project. To obtain information on repairs needed, recently made, and scheduled for NASM facilities, we interviewed staff and analyzed data from the Smithsonian’s Office of Design and Construction. As agreed with Senator Hutchison’s office, we focused on NASM’s collection of 344 aircraft and did not focus on the 8,000 spaceflight items and 23,800 other artifacts in the NASM collection. We did our work from January through June 1995 in accordance with generally accepted government auditing standards. Our work was done in the Washington, D.C., area; Dayton, OH; Mesa, AZ; and Tucson, AZ. We requested comments on a draft of this report from the Secretary of the Smithsonian or his designee. The Smithsonian’s written comments are included in appendix III and discussed and evaluated in chapter 5. In fiscal year 1994, NASM devoted about 14 percent of its total expenditures on collections management, including aircraft restoration. Management has no firm plans for restoring each aircraft in the collection and has no recognized standards against which to monitor the productivity of the restoration staff. NASM’s collections management staff, who work at the Garber facility in Suitland, MD, said that so much attention is placed on exhibits and research, they generally feel disenfranchised from the Washington, D.C., Mall museum staff. NASM management responded that because exhibits are generally privately funded, they do not take away funds from aircraft restoration. Further, management officials said that they have consistently requested increased funding for collections management, even though those efforts have not been successful. NASM currently employs 12 individuals who work on aircraft restoration out of a total staff of 288, or 4 percent of its workforce. The Air Force Museum, another federally funded air museum, employs 20 restorers out of 90 total staff, or 22 percent. The ratio of restorers to total staff at NASM and the Air Force Museum may not be directly comparable, however, because of differences in the museums’ funding, condition of aircraft, number of visitors, and other factors. In fiscal year 1994, NASM devoted about $2.7 million to collections management, or 14 percent of its total expenditures. NASM’s collections management department staff work mainly at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland, MD. Collections management personnel who work at the Garber facility include the restoration staff; personnel who handle the shipping, receiving, and storage of artifacts; the conservator’s staff; and the archival staff. The Garber facility contains the restoration shop, stored aircraft and spacecraft and related parts and artifacts, and film storage. Public access is limited to 3 of the 13 storage buildings at the Garber facility, plus the restoration shop. The archival staff work both at the Garber facility and at the Mall museum, where the photo collections are stored. The collections management department also includes the registrar, who maintains the official object records at the Mall museum. Collections management personnel also maintain the 12 aircraft and 2 hangars that NASM has at Dulles Airport. 
NASM’s restoration staff told us they spend about half of their time working on tasks other than restoration, such as delivering and hanging aircraft, training and supervising interns and volunteers, performing maintenance on shop equipment, research, and administrative work. During the past 5 years, NASM completed seven restoration projects, while continuing restoration work on four others. These 11 projects involved 3 U.S. aircraft and 8 foreign aircraft. NASM spent about $1.4 million to restore 9 of the 11 aircraft; it did not maintain cost records for the other 2. The largest project undertaken was the restoration of the “Enola Gay,” which took over 10 years to complete at a cost of about $809,000. During the past 5 years, NASM spent a total of $11.3 million for collections management. Table 2.2 shows a list of restoration projects that NASM worked on from 1990 to 1995. NASM’s former Senior Curator, who still works at the museum as a volunteer, told us that the restoration staff’s productivity has decreased in recent years. He attributed that productivity decline to (1) the restoration staff being diverted from restoration to other tasks, (2) little or no interest shown by the museum management in restoration, and (3) a decrease in the curators’ involvement in restoration. Although NASM prepares a yearly restoration schedule, it does not have a long-range plan for which aircraft it plans to restore beyond the coming year or specifically what work is needed for each airplane in the collection. Further, NASM does not determine the relative importance of each aircraft or whether and how each aircraft will be used in future exhibits. In commenting on a draft of this report, Smithsonian officials said it was more important to explain why an aircraft was collected and what role it plays in the collection than to plan its use in future exhibits. NASM does not use any work measurement standards or other estimates of the time it should take to perform aircraft restoration work. The Assistant Director for Collections Management told us that she does not have a technical background in aircraft restoration and does not know how long the restoration work should take. She said that she relies on the restoration shop foreman to evaluate the restoration staff’s performance and provide technical guidance to them. At our request, NASM estimated the amount of time it would take to restore all aircraft currently in its collection that needed work. NASM said that of the 344 aircraft in its collection, 245 were exhibitable, 55 needed minor work to become exhibitable, and 44 needed major restoration work. Assuming that a 12-person restoration staff worked full-time on restoring aircraft, NASM estimated in May 1995 that it would take about 52 years to restore the 99 aircraft. However, since the restoration staff told us they spend only half of their time on restoring aircraft, we estimated that it would take about 100 years to restore the 99 airplanes, assuming that no additional aircraft needing work are added to the collection, current staffing trends continue, and the restoration staff continue to spend half of their time on other work. When we asked about the rate of restoration at NASM, the former NASM Director said that he saw no need to accelerate work on the restoration backlog or to plan NASM’s restoration work for the next 50 years because too many changes in restoration techniques would occur over that period.
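The arithmetic behind the 52-year and roughly 100-year figures follows directly from the numbers reported above; the brief sketch below simply reproduces that arithmetic and is not a NASM planning tool.

# Reproduces the restoration backlog arithmetic from the figures cited above.
aircraft_needing_work = 55 + 44      # minor work plus major restoration = 99 aircraft
years_if_full_time = 52              # NASM's May 1995 estimate for a 12-person staff
share_of_time_on_restoration = 0.5   # staff report spending half their time on other tasks

aircraft_per_year_full_time = aircraft_needing_work / years_if_full_time
years_at_current_pace = years_if_full_time / share_of_time_on_restoration

print(f"Aircraft needing work: {aircraft_needing_work}")
print(f"Implied full-time pace: {aircraft_per_year_full_time:.1f} aircraft per year")
print(f"Years needed at half-time effort: about {years_at_current_pace:.0f}")  # 104, rounded in the text to about 100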
The former director said that because of the cost of restoring the “Enola Gay,” NASM has adopted a new policy whereby any large planes will be accepted only if they do not need restoration work. He added that the museum’s mission is broader than restoration and includes research and education, which also have to be supported. The former director also said that it is harder to obtain additional resources for collections management from outside sources than it is for exhibits. He said that NASM had received some private donations in recent years, including a $250,000 corporate gift that was made after NASM loaned spacecraft and aircraft to Japan, new paint-mixing equipment worth $50,000 from a U.S. corporation, an airplane hangar at Dulles Airport worth $100,000 from a group of local construction companies, $27,000 for the “Enola Gay” restoration from veterans, and restoration of two engines of the “Enola Gay” by the San Diego Aerospace Museum. We asked members of the collections management staff, including the Assistant Director for Collections Management, the conservator, 10 restorers, 4 employees involved in maintaining the collection, and 3 volunteers, about the rate of aircraft restoration. The staff generally were not satisfied with the current restoration efforts at the museum and indicated that they felt disenfranchised from the curators and management at the Mall museum. The collections management staff said that (1) collections management, including aircraft restoration, is given a low priority compared to other museum activities, such as exhibits, research, publishing, the Laboratory for Astrophysics, and the Center for Earth and Planetary Studies (CEPS); (2) too much of their time is spent on tasks other than restoration; (3) additional restoration work is required on aircraft because the collection has not been properly maintained; (4) NASM’s management staff and some curators do not provide effective leadership because they do not have adequate backgrounds in museums, aircraft, or spacecraft; (5) little interaction occurs between the restoration staff and the curators; (6) management wasted funds when it recently held a 3-day retreat outside of Washington, D.C.; and (7) some recent exhibits, such as one on Barbie dolls, contain few or no aircraft. We asked NASM management to comment on these concerns. Management’s primary responses were that (1) exhibits are generally privately funded and do not take away funds from restoration; (2) the collections management department is tasked with many responsibilities in addition to aircraft restoration; (3) NASM’s requests for increased funding for collections management have been rejected by the Smithsonian or the Office of Management and Budget (OMB); (4) NASM’s management staff have backgrounds in museums, aircraft, and management; (5) the Smithsonian requires curators to spend time on research and publishing; (6) the retreat was useful to prepare the museum’s mission statement; and (7) one manager had opposed creating new exhibits with few or no artifacts. A comparison of the collections management staff’s views and management’s responses to them is provided in table 2.3. The concerns raised in table 2.3 and NASM’s responses show a high level of disagreement and morale problems among the staff who are responsible for preserving and restoring NASM’s artifacts.
While these concerns were not the focus of our review, our overall work in the management area indicates that some of the conflict that exists could be explained by communications problems and the different perspectives of collections management staff and NASM management. NASM’s focus on research and exhibits, compared to collections management, also has been cited by the Research and Collections Management Advisory Committee, an advisory group formed by the former NASM Director and consisting of academic and museum professionals. The committee’s 1994 report indicated that curators perceive that research and exhibition are the only work that counts for advancement, and as a result, many do not spend time with the collection, visit the Garber facility, or address collections issues on an ongoing basis. The report added that “the leadership of the Museum must continue to focus its attention on collections management issues. The perception amongst the staff is that the Director is most interested in and concerned with research, publication, exhibition, and scholarship generally, and while the Committee knows of the leadership’s dedication to the collection and its care, it believes that this commitment needs continually to be communicated outward to the rest of the staff: to the curators, who need to be reminded of the realities of limits and resources; and to the collections management staff, which often must struggle internally inside the Museum to get the attention and cooperation of other staff.” Morale problems among the collections management staff are not new. According to a 1982 book by a former NASM Director, in the early years of the Garber facility, “a split developed, whereby the people at Silver Hill regarded themselves and were regarded as blue-collar renegades, necessary, but somehow not part of the Smithsonian.” The author added that no one really knew or cared how hard and how well the Silver Hill crew was working. In 1994, the Advisory Committee noted that friction between the curators and collections management staff remained, in spite of improved dialogue and increased contact. In commenting on a draft of this report, Smithsonian officials said that feelings of disenfranchisement on the part of the collections management staff resulted from a number of factors, most notably resource-related matters. A NASM Collections Management Advisory Committee member also expressed the view that NASM management and exhibits appeared to be geared more toward pleasing academic peers than the public, and that exhibits have too much interpretation of the role that aircraft and spacecraft played in history and society. He cited, for example, the criticism that occurred with respect to the interpretation that was contained in a proposed script for the exhibit of the “Enola Gay,” the aircraft that dropped the atomic bomb over Hiroshima, Japan. In January 1995, 81 Members of Congress wrote to the Secretary of the Smithsonian, complaining that the former director’s actions in drafting the exhibit script “were a slap in the face to all the parties who contributed their time and expertise in creating an exhibit that best reflects the contributions that all Americans made to the culmination of World War II” and demanding the NASM Director’s resignation. On May 2, 1995, the NASM Director resigned, citing the controversy involving the exhibit, which received considerable media attention. NASM is now displaying only part of the plane, without extensive commentary.
Even if NASM were to restore more aircraft, the museum does not have adequate storage facilities to protect them from deterioration. Current conditions are much improved since the time when much of the collection was stored outdoors, and some repairs have been made in recent years to the Garber facility. However, the buildings in which the aircraft are stored do not have humidity controls or air-conditioning, and only a few are heated. As a result, the collection in storage is continuing to deteriorate, including previously restored aircraft. NASM has consistently requested increased funding for collections management and for storage facility repairs in recent years, but NASM must compete with other Smithsonian museums for limited resources and has been unable to obtain needed funding. In the absence of additional funding, NASM has not developed a strategy to pursue alternatives to lessen the storage burden, such as loaning aircraft to other museums for 5 to 10 years for display in exchange for their restoring the aircraft for NASM or deaccessioning items with less historical or technological significance. The Smithsonian’s collections management policy, issued in May 1992, requires museums to ensure that collections are maintained in conditions intended to preserve and extend physical integrity. Under the policy, prudent collections management requires the identification and elimination or reduction of damage to the collection, such as deterioration. The National Park Service, which has prepared guidance on museum collection policies, indicates that collections should be maintained in storage facilities with appropriate levels of relative humidity and temperature. Much of NASM’s aircraft collection was stored outside in Suitland, MD, from the early 1950s until the aircraft were moved into temporary storage buildings that were constructed primarily in the 1950s, 1960s, and early 1970s. Although moving the aircraft indoors was an improvement over storing them outdoors, the Garber storage facilities are still not environmentally controlled. The wood, fabric, and even metals used in aircraft are susceptible to deterioration and corrosion when exposed to great differences in temperature and humidity, even though aircraft may be protected from rain and snow. Storing the aircraft outdoors and later in facilities that were not environmentally controlled caused aircraft in the collection to deteriorate, which meant that additional restoration work had to be done. NASM currently has 236,300 square feet of storage space at the Garber facility, including the restoration shop, and 50,200 square feet of space at Dulles Airport, or a total of 286,500 square feet. Of the 286,500 square feet of space for storage and the restoration shop, 101,500 square feet are heated. The storage space is overcrowded and lacks humidity controls. Overcrowding has resulted in 5 of NASM’s 344 aircraft being stored outdoors: 2 are at Dulles International Airport; 1 is at AMARC in Tucson, AZ; 1 is at Andrews Air Force Base, MD; and 1 is on loan at the Pima Air and Space Museum in Tucson. From 1991 to 1994, NASM undertook a conservation assessment, examining the condition of the museum’s 13 storage buildings at the Garber facility and the condition of the artifacts contained in them. According to the assessment, the buildings and artifacts are suffering from wide temperature fluctuations, leaky roofs, structural problems, and dirt and dust accumulation.
Moreover, the reports indicated that the building conditions are promoting the deterioration of the collection, including restored aircraft. For example, the assessment on a building that contains restored aircraft indicated that “the restoration process alone cannot be considered a solution because many restored objects in Building 20 are deteriorating . . . . Almost all recently restored aircraft in Building 20 have evidence of corrosion.” Excerpts from the reports are contained in appendix I. The conservation assessment also commented on overall preservation practices at the Garber facility. According to a June 1993 report, “[T]he condition of many objects stored at the Garber Facility illustrates what can happen when museum administrators permit a collection to grow and develop without providing direction and funding for its preservation . . . . [P]reservation is a primary museum responsibility, moreso than education, research, or exhibition, given that those functions are, or should be, collection-dependent. Therefore, preservation is not an option or a low priority, nor is it a one-time budget expense. It is a continuous process that requires adequate levels of staffing and funding.” In the past 10 years, the Smithsonian spent $9.1 million to improve the Garber facility, including roof repairs, asbestos removal, and storm-water structures, or an average of $910,000 per year. Also, a new artifacts storage building to be shared with the Smithsonian’s National Museum of American History (NMAH) and a new chemical building for NASM are being constructed at a cost of about $1.4 million. While these repairs have helped improve conditions at the Garber facility, much more repair work is needed. We interviewed officials from the Smithsonian’s Office of Design and Construction, which is responsible for maintaining and repairing Smithsonian facilities, and asked about the feasibility of obtaining additional repair funds for the Garber facility. The Design and Construction officials indicated that during the last 5 years, the Smithsonian spent over 8 percent of its total repair funds on NASM facilities (including the Garber facility and the Mall museum), which represent 7 percent of the square footage of all Smithsonian facilities, and spent 5 percent of its total repair funds on the Garber facility alone, which represents 3 percent of the square footage of all Smithsonian facilities. The officials also said that over the next 5 years, the Mall museum needs at least $33.8 million in repairs and that the Garber facility needs at least $7.4 million in repairs. However, the officials said that it is unlikely that NASM will receive the needed repair funds because NASM must compete with other Smithsonian museums for scarce repair funds. The Design and Construction officials said that the Smithsonian has a backlog of $250 million in deferred maintenance for all of its museums, can only afford to make about $25 million in repairs each year, and accrues another $32 million to $35 million in additional repair work each year. Because new requirements exceed available funding each year, the backlog of deferred work will continue to grow. The Office of Design and Construction officials also said that recent improvements that have been made to the Garber facility are not expected to last long. They added that some of the Garber buildings have structural problems and may not be repairable.
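The Office of Design and Construction figures cited above imply that the Smithsonian’s repair backlog grows even while repairs are being made. The sketch below simply projects that gap forward for several years using the reported figures; the $33.5 million midpoint used for annual new requirements is an assumption.

# Projects the Smithsonian-wide deferred maintenance backlog from the figures
# reported by the Office of Design and Construction. The midpoint used for
# annual new repair requirements is an assumption for illustration.
backlog = 250_000_000           # reported deferred maintenance backlog
annual_repairs = 25_000_000     # repairs the Smithsonian can afford each year
new_requirements = 33_500_000   # midpoint of the reported $32 million to $35 million range

for year in range(1, 6):
    backlog += new_requirements - annual_repairs
    print(f"Year {year}: backlog of about ${backlog / 1e6:.1f} million")
# Under these figures, the backlog grows by roughly $7 million to $10 million per year.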
NASM must compete with other Smithsonian museums for its overall funding, including collections management, as well as repair funds. In 4 of the past 5 years, NASM’s requests for increased funding for collections management have been turned down by either the Smithsonian or OMB. In fiscal year 1994, for example, NASM requested an additional $395,000 and 3 additional positions for collections management. The Smithsonian reduced that request to $150,000 and 1 additional position, which OMB rejected. For fiscal year 1996, NASM requested an additional $576,000 and 9 additional positions for collections management. The Smithsonian reduced the fiscal year 1996 request to $411,000 and 1 additional position, which OMB rejected. In seeking additional funds for collections management, NASM has stated: “NASM has the unique mission to preserve the technology represented by the history of aviation and spaceflight, by preserving the vehicles in which early pioneers broke speed records, explored new worlds, fought aerial battles, and sought data about our universe. Judging by the millions of visitors who visit the Museum and by the many letters from the public urging us to step up our efforts to preserve this evidence of flight, there is a broad base of public support for artifact restoration. Sadly, without additional resources, some of our treasures may be lost.” Museum officials said that they must rely on federal funds for maintaining their facilities because of the difficulty of raising private funds for storage facilities with no public access. Another official said that donors want their contributions to be visible, for example, with exhibits, where the contributors’ names can be prominently displayed. While we do not take any position on how NASM should allocate its resources, the Assistant Director for Collections Management suggested reassigning some curatorial staff to collections management temporarily to address critical collections care problems. She also said that the Smithsonian should increase its focus on the care of the collection and place less emphasis on research until major collections problems are under control. NASM’s curators are required to conduct scholarly research in their fields of interest, as well as assume other responsibilities involving exhibits, managing the collection, and public service. We visited AMARC in Tucson, AZ, where NASM currently stores one aircraft. An AMARC official said that NASM does not pay to maintain its aircraft, as the other AMARC customers do. The official said that maintenance normally involves putting a new protective coating on the aircraft and new oil in the engines every 6 months, which involves about 4 hours of work and generally costs a few hundred dollars per plane. The official also said that no one from NASM had visited AMARC to inspect its aircraft for 2 or 3 years. The official added that, even though NASM does not pay to have its aircraft maintained, AMARC took care of two of NASM’s aircraft at no cost to NASM because they were on display. NASM has attempted to preserve some of its aircraft that are stored outside at Dulles Airport and Andrews Air Force Base. Two of NASM’s aircraft that are stored outside at Dulles Airport—a Lockheed 1049 Constellation and a Lockheed C-130A Hercules—and another that is stored outside at Andrews Air Force Base—a Grumman A-6E Intruder—are connected to dehumidifiers. The airplane at Andrews Air Force Base is housed in a container.
The Collections Management Advisory Committee reported in 1991 that, while the facilities improvements that have been made at the Garber facility were substantial, they were short-term fixes. Stating that a new extension must be built to provide adequate indoor storage space, the committee said in its 1991 report that the longer the delay, the higher the cost in deteriorating artifacts and interim expenses. The committee added that some of the deterioration is irreversible. According to Smithsonian policy, museums normally establish minimum standards of physical care and regular schedules for the maintenance of collections. NASM Collections Management staff said that they do not conduct formal, periodic inspections of aircraft in storage. Smithsonian officials said that curators are aware of the condition of aircraft when they are acquired and are responsible for reviewing the condition of those in storage. Smithsonian officials said that some curators spend an average of 30 percent of their time on collections management and that one curator spent 3 years working on an engine maintenance program. However, some curators told us that they generally do not inspect the collection unless they have a reason to do so, such as preparing an exhibit involving the artifacts. NASM has adopted a new policy that 0.5 percent of the collection be randomly inspected by 1996 and then the same amount be inspected on a biannual basis. The proposed policy was submitted to the Smithsonian for approval in July 1994, but was not approved until May 1995. NASM’s Assistant Director for Collections Management said the Smithsonian does not give collections management a high priority. We asked NASM officials about who is responsible for seeing that the collection is properly cared for—the curators or the collections management staff. The Chairman of the Aeronautics Department said that the curators and collections management personnel have joint responsibility. However, the former senior curator said that it is unclear who is responsible for the collection. The collections management staff said that the Collections Management Department is responsible for the physical care of the collection. The former NASM Director said that the curators are supposed to know the condition of the artifacts in their collections and, if they notice a problem, are to bring it to the attention of the collections management staff, who then prepare a correction plan with the assistance of the curators. We interviewed officials from the Smithsonian’s NMAH, who told us that NMAH faces the same, if not worse, storage problems as NASM. NMAH officials said that in the early 1980s, the museum was forced to decide that it could no longer collect large objects because of the lack of storage space. The officials said that of the seven storage buildings NMAH has at the Garber facility, all have leaky roofs, three have asbestos, and one is quarantined because of asbestos contamination. In a 1991 article for a Smithsonian publication, NASM’s conservator said that collections storage is often the most neglected function within a museum. He said that exhibition is generally considered a higher priority and receives a greater share of the funding, and that storage is considered by many to be a static function requiring only space. 
The conservator also noted that one of the major threats to important collections is poor storage and that as a result of poor storage (1) objects could be misplaced, (2) important information could be lost, (3) irreversible damage could develop and progress undetected, and (4) theft and damage could go unnoticed for years. NASM’s conservator told us that NASM’s aircraft need more preservation and less restoration. He said that restoring aircraft is not needed to preserve them. The former NASM Director said that he would like to move the entire collection at the Garber facility to the museum’s planned new extension facility, which is discussed in detail in chapter 4. However, the Research and Collections Management Advisory Committee has recommended that improvements be made to the Garber facility, despite the extension plans. In its 1990 report, for example, the committee noted the need for improvements to the chemical treatment facility and welding shop, as well as overcrowding at the Garber facility. In the minutes of its 1989 meeting, the committee indicated that NASM should not rely on the prospect of the future extension, which may be years away, to substitute for today’s crucial conservation and storage needs. Because NASM lacks a clear focus regarding its mission, its collection includes some aircraft whose historical significance has been questioned, some duplicate aircraft, and a number of foreign aircraft. It is not clear whether this fulfills Congress’ original intent to establish a national museum that showcases this country’s most important aviation achievements. Reducing the size of the collection and undertaking second-party aircraft restorations with temporary display loans are viable alternatives to lessen NASM’s burden of caring for a large aircraft collection. However, NASM has not developed a strategy to deaccession aircraft. Nor has it accelerated its pursuit of second-party restorations with temporary loans, despite repeated recommendations by its advisory committee to do so. Congress intended that NASM collect, preserve, and display aeronautical and space flight equipment of historical interest and significance. While we have no basis for determining which aircraft should be included in NASM’s collection, some individuals familiar with NASM’s aircraft collection whom we contacted questioned whether NASM should have collected certain aircraft. The Director of the Air Force Museum, for example, questioned the wisdom of NASM having acquired a large collection of World War II Japanese aircraft. NASM’s Senior Advisor to the Director questioned whether the museum should have acquired a Boeing 727—a commercial aircraft still widely used. Further, a NASM curator, who is a former Air Force pilot, questioned whether the museum needs two McDonnell F-4s. Moreover, the Air Force Historian, who is a former NASM curator, said that NASM’s collection is disorganized and is too large to care for. The Research and Collections Management Advisory Committee has also said that the museum “must get serious about deaccessioning, and expand where feasible its loan program. We recognize the reluctance on the part of the staff to part with important artifacts and the fear that other institutions, as well as the process of physical transfer, might produce some damage. But the collection is simply too large—for what this Museum needs in terms of a national collection, in terms of balance, and most compelling, in terms of the Museum’s ability to prevent artifacts from deteriorating.
And the Museum cannot accomplish its mission of preservation and the diffusion of knowledge if so much of its collection is hidden from public view in storage facilities that do not meet minimum museum standards.” In its most recent report, the committee again recommended that NASM accelerate deaccessioning and loans of aircraft and even suggested that the museum consider seeking authority from Congress to sell part of its collection. We asked the former NASM Director what the museum has done to respond to the committee’s recommendation regarding reducing the size of the collection. He said that it is not easy to find other museums to take NASM aircraft. Also, NASM’s Senior Advisor to the Director said that there is a reluctance to deaccession aircraft, because some NASM staff believe everything in the collection is valuable. From 1991 to 1995, NASM deaccessioned 11 aircraft. Another six aircraft have been identified for deaccession, but NASM cannot find responsible museums willing to accept them, according to NASM officials. The Assistant Director for Collections Management said that, while NASM may not have too many aircraft to reflect the history of aviation, the museum does have more planes than it can care for. She also said that there should be more coordination between NASM and other national museums that operate on federal funds, such as the Air Force and Navy museums, to avoid duplication. The Air Force Historian agreed that NASM should not duplicate the collections of the Air Force and the Navy. The Chairman of the Research and Collections Management Advisory Committee told us that there has been disagreement among museum staff about whether NASM should deaccession aircraft. The Advisory Committee Chairman also said that, since the former NASM Director was not a historian, he had to rely on the advice of expert curators regarding deaccessions, but that the experts could not agree on what aircraft, if any, to dispose of. The chairman said that NASM has the greatest aircraft collection in the world, but that it needs to be pruned. Moreover, the chairman said that because the museum had such a large and varied constituency, including the public, military, airplane buffs, and film makers, deaccessioning aircraft would be difficult. Another alternative that could lessen NASM’s burden of caring for its aircraft collection would be loaning its aircraft to other organizations for display over a temporary period, such as 10 years, in exchange for having them restored. NASM officials told us that they are using second-party restorations and in fact developed the legal and contracting procedures to undertake such work, and would welcome similar proposals from other institutions. While NASM is using this option, it has no detailed strategy to determine whether there are additional opportunities to use it. NASM reported that it loaned 18 aircraft to other museums for restoration and storage during 1993 and 1994. Of these 18 aircraft, NASM initiated the loan of 8 aircraft—6 for restoration and 2 for storage. The other party initiated the loans for the other 10 aircraft. For example, the Champlin Fighter Museum in Mesa, AZ, contacted NASM about restoring a Kawanishi N1K2-J, nicknamed the “George,” for NASM in exchange for being able to display it for 10 years. Table 3.1 shows the six aircraft restoration loans that were initiated by NASM in the last 2 years. The Assistant Director for Collections Management told us that NASM does not have an active program to identify outside restorers. 
The former NASM Director agreed and said that NASM must be careful about who it allows to restore its aircraft, since many museums do not meet NASM’s restoration standards. To help overcome this concern, NASM provides standards that second-party restorers must follow. These standards, together with careful screening of the capabilities of second parties and periodic monitoring of work in progress during restoration, can help ensure that adequate restoration practices are followed. In addition, NASM officials said that the loan program requires considerable staff time for crating, shipping, and other related tasks, which reduces the staff time available for in-house restoration efforts. The Collections Management Advisory Committee has also recommended that the museum expand its loan for restoration program. In its 1990 report, the committee reported that there was some resistance by NASM staff to the program because of concern that the quality control of restoration would be lost. However, the committee noted that, given the positive experience involving the San Diego Aerospace Museum’s restoration of one of the four “Enola Gay” engines, the program should be expanded. We also noted that NASM was satisfied with the Champlin Museum’s restoration of the Japanese World War II fighter, the “George.” The Director of the San Diego Aerospace Museum told us that his museum would be interested in restoring entire aircraft for NASM in the future. We did not survey other museums about their possible interest in restoring aircraft for NASM, but others may be capable of and interested in doing such work. Another approach, undertaken by the Air Force Museum in Dayton, OH, and the Naval Aviation Museum in Pensacola, FL, involves providing other museums with two aircraft for restoration, allowing them to keep one and restore and return the second. Likewise, NASM has provided three aircraft to a German museum, which, after restoring the aircraft, plans to give one to the Air Force Museum, return one to NASM, and keep the third. NASM officials cited plans to build an extension at Dulles Airport, VA, as the solution to the museum’s storage and restoration problems. However, it is uncertain when or whether the extension will be built, given the museum’s need to raise at least $100 million in private funds for its construction. Also, NASM would like to acquire 80 aircraft over the next 30 years, which would exacerbate its current storage problems. In the early 1980s, the Smithsonian began looking for a site on which to build an extension for NASM to store its aircraft collection currently housed at the Garber facility and large aircraft to be acquired in the future. By the mid-1980s, the extension was also planned to house the restoration facilities at Garber and to display aircraft on a limited basis. By 1989, NASM wanted the extension to include a theater, a restaurant, and a museum shop, as well as expansion space for other Smithsonian bureaus. A key consideration in selecting a site was access to an active runway to accept large aircraft that could not be transported to the museum on the Mall. As early as 1983, after considering several sites, the Smithsonian chose Dulles Airport as its preferred site. In 1988, the site selection process was reopened after the Governor of Maryland expressed an interest in locating the facility at the Baltimore-Washington International Airport.
Then, in 1990, the City of Denver submitted an unsolicited proposal to locate the extension at Stapleton International Airport. In February 1991, we testified that the Smithsonian’s site selection process had not adequately considered and justified its selection of Dulles Airport as the extension site. The Smithsonian subsequently provided information and analysis needed to support its selection of Dulles Airport as the extension site and its decision to reduce the estimated cost of the extension from $325 million to $162 million. In March 1991, we informed the Chairman of the House Appropriations Subcommittee on Interior and Related Agencies that, in light of the Smithsonian’s additional analysis, its decision to locate the extension at Dulles Airport could be objectively defended by the Smithsonian. NASM is currently planning to build a 670,000 square-foot extension facility at Dulles Airport at a cost of $162 million. NASM plans to finance the extension through private fundraising and funds pledged by Virginia. According to NASM officials, Virginia has agreed to provide the Smithsonian with a $3 million interest-free loan and has pledged to finance site-work improvement for highway access, airplane taxiways, and parking, plus $6 million in construction costs. In addition, the Governor of Virginia has indicated that the state is committed to issuing up to $100 million in bonds to assist in capital construction. Under the proposal, the Smithsonian will make lease payments to Virginia equal to the debt service, and once the debt is retired, title to the facility will pass to the Smithsonian. The Smithsonian will be responsible for all of the extension’s operating costs. In August 1993, legislation was approved authorizing the extension and the appropriation of $8 million in federal funds for its planning and design. However, in July 1995, the President signed a bill to rescind $4,175,000 in planning funds that had been appropriated for the extension. In September 1995, a congressional conference committee adopted a report regarding a fiscal year 1996 appropriations bill authorizing the appropriation of $1 million for planning and design of the Dulles extension. As of September 26, 1995, Congress had not yet approved the conference report. NASM officials estimated that the extension could open sometime during 2000 to 2005, but added that rescinding the planning funds could postpone the extension opening. The former NASM Director told us that to raise $100 million in private funds, NASM may need to enter into joint ventures with companies, allowing them, for example, to hold permanent aerospace trade fairs at the extension. The former director also said that the extension may incorporate entertainment rides and simulators, and that corporations may be permitted to use their logos in exchange for financial support. NASM has begun formulating a financing plan for the extension. Some NASM officials said that it would have been difficult for the museum to raise the extension funds under the former director because of the controversy involving the exhibit of the “Enola Gay,” discussed in chapter 2. It still remains uncertain, however, if the Smithsonian will be able to obtain the support needed to construct the extension. In its 1994 report, the Research and Collections Management Advisory Committee noted that NASM management lacked a consensus on the expected benefits of the extension, which were never extensively discussed or debated within the Smithsonian. 
The committee recommended that the museum formulate a mission statement for the extension. The former NASM Director told us that he was developing a mission statement and that he would like the extension to allow the museum to tell stories involving the (1) history of the Cold War, (2) effects of the World War II bombing campaign, (3) impact of the revolution in air travel for Americans, and (4) benefits to society provided by downward-looking satellites. In March 1995, the Secretary of the Smithsonian contracted with the National Academy of Public Administration (NAPA) to review the management of NASM. NAPA is to complete its study in the fall of 1995 and is to pay particular attention to examining NASM’s mission. As part of the study, NAPA said it would review whether NASM is adequately considering the mission of the extension. In August 1995, Smithsonian officials said that the Board of Regents would shortly review the scope of NASM’s mission. Plans for the extension include 670,000 square feet of space, compared to 286,500 square feet of space at NASM’s current storage and restoration facilities. We asked museum officials about the feasibility of immediate construction of a restoration shop and storage facility at Dulles Airport, as part of the first phase of the extension, to replace the current 286,500 square feet of storage space. The Assistant Director for Collections Management said that it would be best to start the extension project by building a restoration shop with public access, then build display and storage space, followed by construction of storage-only space. She also said that the extension must have amenities to attract donations and that Virginia has indicated that it may not be interested in providing funds if the facilities were not accessible to the public. The former senior curator said that the NASM Director should be given a mandate to build the extension, similar to the one given to the former NASM Director in opening the museum in 1976. He noted that in that situation, former astronaut Michael Collins was selected as the NASM Director because he had the needed experience and visibility in working with Congress. NASM would like to collect 80 aircraft of all types over the next 30 years, even though it cannot properly care for the collection it has now. Included among these 80 aircraft are such large aircraft as the Boeing 747 and Boeing B-52. The 80 aircraft, which were contained in a collections rationale prepared by NASM in 1989, are listed in appendix II. The 1989 collections rationale noted that past attempts by the museum to prioritize the collection effort were unsuccessful. The rationale acknowledged that after over 80 years of collecting aeronautical artifacts, NASM at that time barely had adequate exhibition and storage space for the aircraft in its collection or for large aircraft yet to be acquired. The 80 aircraft that NASM would like to acquire include 23 general aviation aircraft; 11 commercial aircraft; 12 military aircraft; 11 light, ultralight, or homebuilt aircraft; 15 gliders; and 8 vertical flight aircraft. The collections rationale contains justifications for each proposed acquisition. For example, the acquisition of the Boeing 747 was justified because it epitomized the new era of wide-bodied airliners.
The rationale also listed some practical criteria to be considered before acquiring additional aircraft: (1) the aircraft must be obtained, preserved, restored, and exhibited at a reasonable cost; (2) it can be transported to the museum at a reasonable cost; (3) it has research, scholarship, and/or educational value; and (4) it meets the physical requirements for exhibition. In its 1990 report, the Advisory Committee noted that the museum’s aircraft collections rationale, which has not been revised since 1989, had no clear goal or guidance. The committee said that to be useful, the rationale should include (1) a statement of the scope of the collection, (2) an explanation of how the scope of collecting relates to the charter of the museum, and (3) an explanation of how professional standards are maintained throughout the collections process. Further, the committee indicated that the rationale lacked safeguards to ensure that additions to the collection are not made if they further jeopardize the museum’s ability to cope with the existing collection and if the new additions cannot be cared for adequately. Such a practical consideration would seem critical to NASM in view of its traditional inability to obtain funding to adequately care for its collection. The 1991 rationale prepared by NASM’s Space History Department also included practical criteria for assessing priorities in collecting future space artifacts. It presented these criteria as the following questions to be applied, in addition to technical criteria, when deciding whether to acquire space artifacts. Is the same object preserved elsewhere in a safe or permanent museum, or does it rightly belong in another, more appropriate museum? Can the object be preserved by the means at hand, or is its preservation beyond the capability of NASM? Is the object too large to be collected and preserved, or might parts of the object adequately represent its history? By incorporating these criteria in the written aircraft collections rationale, NASM could reduce the risk that new acquisitions would further strain its ability to care for the existing collection and could help ensure that new additions are adequately cared for. In commenting on a draft of this report, NASM officials said that although the collections rationale suggests that 80 aircraft might be added to the NASM collection by 2020, it should be understood that the presence of an aircraft in the rationale does not constitute permission to collect it. Officials said that a curator desiring to collect an aircraft must follow a set of carefully crafted procedures that have been established over the past 6 years. Under the procedures, (1) the object must appear in the rationale or replace a similar object listed in the rationale and (2) the acquisition must be proposed and defended before the Aeronautics Department Collections Committee, which consists of all of the curators, and the NASM Collections Committee, which consists of both curatorial and collections management staff. NASM officials also said that over the past 5 years, no aircraft has been acquired without provision being made for its appropriate storage and care. NASM officials said that from 1989 to 1995, the museum acquired only 12 aircraft, compared with the acquisition of 56 aircraft from 1982 to 1988. Although NASM is popular with the public and has preserved many of our nation’s historic air and space artifacts, the management of the aircraft collection that is not generally seen by the public needs improvement.
NASM commits relatively few resources to aircraft restoration, compared to other museum activities and another federally funded air museum. But, even if NASM were to increase its restoration efforts, the museum would have no place to properly display or store the aircraft. Therefore, it is important for NASM to determine how to better preserve its collection in view of the limited financial resources available for aircraft restoration and storage, including determining what size collection can be adequately supported. Since NASM was established, certain aspects of the museum’s mission as a national air and space museum have been vague. For example, the legislation does not specify whether the museum should duplicate collections at other federally funded air and space museums or whether a national museum should include foreign aircraft. Once NASM’s mission is clarified, the museum would be better able to develop criteria for what constitutes historically and technologically significant aircraft and, in the context of such criteria, which aircraft it should have in its collection to fulfill its mission, considering available resources and the adequacy of storage facilities. If it is determined that NASM’s current collection is too large in view of the resources and facilities available, options to reduce the collection size so that the remaining aircraft can be stored or displayed in space with adequate environmental controls include deaccessioning aircraft and obtaining second-party restorations with temporary loans to other museums. Using additional second-party restorations would help preserve NASM’s collection, alleviate its storage capacity problems, and share its collection with the public. The planned extension at Dulles Airport should help alleviate NASM’s storage facility problems, but funding is uncertain and the extension may take several years to complete. One option that may be available to reduce costs would be to limit the new space to the same size as current storage facilities. If feasible, this option should help NASM expedite plans to replace its deteriorating storage facilities with new, properly environmentally controlled storage and restoration space at Dulles Airport. Including lower-cost, limited public access and a few amenities would make the new space more useful. In addition to storing aircraft in substandard space, NASM does not have a management plan for each aircraft that describes (1) whether and how the aircraft will be used in future exhibits, (2) to what extent and when it will be restored, and (3) who is responsible for monitoring its condition. The lack of resources devoted to collections management has resulted in the restoration staff feeling disenfranchised from the Mall museum staff. We recommend that the Secretary of the Smithsonian, together with the Acting NASM Director, consult with the appropriate committees of Congress to better define the mission of a national air and space museum and, within that definition, criteria for identifying historically and technologically significant aircraft.
As part of this effort, the Secretary and NASM Director should
- specifically consider the extent to which the museum should (1) include foreign aircraft in its collection and (2) duplicate aircraft contained in the collections of other federally funded museums;
- determine the relative priority of the aircraft contained in the NASM collection in the context of the definition of historically and technologically significant aircraft referred to in the above recommendation;
- determine the number and types of aircraft that should be retained, given the newly established criteria and actual and expected levels of funding and storage capacity; and
- deaccession those aircraft in the NASM collection that either do not meet the historically and technologically significant criteria or cannot be adequately stored and maintained with available resources.
In pursuing the latter, additional consideration should also be given to second-party restorations and temporary loans of aircraft. We further recommend that the new Director of NASM
- develop a management plan for those aircraft that are to remain in the NASM collection that includes (1) whether and how exhibits will be developed for purposes of displaying the collection, (2) the extent to which each aircraft will be restored and when such restoration will be done, and (3) which organization will be responsible for monitoring each aircraft;
- develop a plan to increase the interaction of the curators and collections management staff; and
- further explore private funding alternatives and the feasibility of options to better care for aircraft, such as constructing, as an initial phase of the Dulles Airport extension, a smaller, environmentally controlled facility to house those aircraft that will remain in the collection and are currently in inadequate storage facilities.
We requested comments on a draft of this report from the Secretary of the Smithsonian or his designee. The Under Secretary provided comments dated August 17, 1995, which are in appendix III. The Smithsonian provided additional comments in an attachment to its August 17, 1995, letter. These comments have been discussed with Smithsonian officials, and changes have been incorporated into this report where appropriate. The Under Secretary said that the Smithsonian recognizes a number of critical issues raised in our report and is working to address them. She said the greatest challenges are the lack of adequate storage space and inadequate resources to do all of the things that must be done. The Under Secretary disagreed with some of our findings and indirectly indicated disagreement with some of our proposed solutions to the collections care problems that we identified. Regarding our first recommendation, the Under Secretary indicated that (1) the Board of Regents will shortly review the scope of NASM’s mission; (2) NASM has rationales that define criteria to assess an object’s value to the collection; (3) when NASM was created, it was recognized that there might be some duplication between NASM and other museums; and (4) the overwhelming emphasis of the NASM collection is American. Our work indicated that the scope of NASM’s mission was unclear. The Air Force Historian, for example, questioned the wisdom of NASM having acquired a large collection of World War II Japanese aircraft. We still believe that the Smithsonian should consult with Congress to better define NASM’s mission and the criteria for identifying historically and technologically significant aircraft.
We believe this consultation is necessary primarily because of the inadequate resources NASM has to take care of its current collection and the need to address that problem in a systematic fashion. Our recommendations to determine the relative priority of all aircraft in the collection in the context of the criteria for historical and technological significance and to determine the number of aircraft that should be retained are part of our proposed systematic solution to NASM’s storage problems. However, the Smithsonian did not address these recommendations in its comments. Also, while the Smithsonian’s Board of Regents may help define NASM’s mission, the board is not in a position to decide whether federally funded museums should duplicate military aircraft in their collections. Further, the budget climate has changed since NASM was created, and Congress now may want to look for opportunities to reduce or eliminate duplication. Although NASM officials indicated that the practical criteria for assessing priorities in collecting future space artifacts apply to aircraft acquisitions, the written rationale does not indicate that the criteria also apply to aircraft. Moreover, since most of NASM’s operating costs, including those of the Dulles extension, probably will be paid with federal funds, we believe that Congress should be further involved in determining the role of the nation’s air and space museum. With respect to our recommendation to deaccession aircraft that cannot be adequately stored and maintained, the Under Secretary said the Smithsonian tries hard to deaccession aircraft and would welcome additional loans to other museums but cannot always find takers. As mentioned in this report, however, NASM has not developed a strategy to find other museums that might be interested in restoration loans. We believe that such a strategy would better inform other museums about restoration opportunities and could result in more restoration loans of NASM aircraft. Regarding our recommendation to develop a management plan for each aircraft in the collection, the Under Secretary indicated that curatorial responsibility has been assigned for each aircraft and that curators prioritize treatments as required to preserve aircraft. However, as discussed in this report, NASM does not prepare long-range plans for exhibits and restorations. We are not suggesting that an exhibit plan be developed for each aircraft in the collection. However, we are suggesting that NASM make determinations about the likelihood of whether aircraft will be exhibited at some point and the extent of restoration that would be needed to put the aircraft that are likely to be exhibited into exhibitable condition. Further, the lack of resources for collections management demonstrates an even greater need for a management plan. In response to our recommendation that NASM increase the interaction between the curators and collections management staff, the Under Secretary said that they already collaborate and that such collaboration has developed into a full and genuine partnership. Our interviews with several collections management staff (discussed in ch. 2) disclosed viewpoints that were substantially different from this. We believe that the substantial differences of opinion between the collections management staff and NASM management on this subject indicate that more attention should be devoted to determining the actual state of this relationship and how the differences should be resolved.
Finally, the Under Secretary indicated that she disagreed with our last recommendation, which dealt with exploring the feasibility of constructing a smaller, initial storage facility at the Dulles extension to replace inadequate storage space. She said that display and educational space would be constructed by 2003 and that storage space would be built after this, as funds become available. We continue to believe that the need for storage space is more critical than the need for additional display and educational space, but we recognize the need to accommodate the differing views and desires of the various parties that may contribute to funding the extension.
Pursuant to a congressional request, GAO reviewed aircraft restoration at the Smithsonian Institution's National Air and Space Museum (NASM), focusing on: (1) the adequacy of facilities for preserving aircraft; and (2) options to improve NASM aircraft restoration efforts. GAO found that NASM: (1) commits fewer resources to aircraft restoration than it does to its other museum activities; (2) lacks adequate space to properly display or store restored aircraft; (3) must determine how to better preserve its aircraft collection despite its limited financial resources and lack of space; (4) needs to develop criteria for what constitutes historically significant aircraft and consider which aircraft should be kept in its collection; (5) could help preserve its aircraft collection, alleviate its storage capacity problems, and share its collection with the public by using second-party restorations; (6) has expansion plans at Dulles Airport that could help alleviate storage facility problems, although funding is uncertain and the extension could take years to complete; (7) could reduce short-term costs by limiting the new storage space to the same size as current storage facilities; and (8) does not have a management plan describing how each aircraft will be used in future exhibits, the extent to which each aircraft will be restored, and who is responsible for monitoring the condition of each aircraft.
To determine whether DOD’s logistics migration efforts will meet its objectives for dramatic improvements in operational efficiency and effectiveness, we reviewed DOD’s policies and guidance for enterprise integration, corporate information management, and logistics migration system selection to ensure that information technologies are acquired, managed, and used in the most efficient and effective manner. Our assessment included analyzing DOD and prior GAO studies of the migration system strategy implementation and comparing DOD’s logistics information resources management practices to those followed by public and private organizations. We conducted our review from August 1995 through August 1996 in accordance with generally accepted government auditing standards. Details of our scope and methodology are contained in appendix I. The Deputy Under Secretary of Defense for Logistics provided written comments on a draft of this report. These comments are discussed at the end of this report and reprinted in appendix II. DOD has said that it must either improve effectiveness and efficiency dramatically or face real losses in capability to meet its mission objectives. As characterized by the Under Secretary of Defense for Acquisition and Technology (USD(A&T)), “Every logistics dollar expended on outdated systems, inefficient or excess capability and unneeded inventory is a dollar not available to build, modernize or maintain warfighting capability.” Defense logistics is the acquisition, management, distribution, and maintenance of the DOD materiel inventory used to provide replacement parts and other items for sustaining the readiness of ships, aircraft, tanks, and other weapon systems, as well as supporting military personnel. Logistics business operations include four major business activities—depot maintenance, distribution, materiel management, and transportation. DOD has reported that it spends over $44 billion annually maintaining, managing, distributing, and transporting a materiel inventory of $70 billion to support about $600 billion in mission assets. In October 1989, DOD established the CIM initiative to dramatically improve the way DOD conducts business, primarily by adopting best business practices used in the public and private sectors and building the automated information systems to support those improved practices. Originally, CIM focused on administrative areas such as civilian payroll, civilian personnel, and financial operations. DOD quickly broadened the initiative to encompass all DOD business areas, including the major logistics business activities. In January 1991, the Deputy Secretary of Defense endorsed a CIM implementation plan under which DOD would “reengineer,” that is thoroughly study and redesign, its business processes before it standardized its information systems. The Deputy Secretary believed this implementation strategy would emphasize the importance of improving the way DOD did business rather than merely standardizing old, inefficient business processes. In 1992, DOD projected that by focusing on business improvement, it could save as much as $36 billion by fiscal year 1997. DOD expected that improvements to its logistics operations would provide most—$28 billion—of these CIM savings. By early 1992, DOD had identified a number of process improvement projects. 
However, later in the year, the Acting DOD Comptroller, concerned that the current CIM implementation approach would not produce the cost savings needed to help offset significant budget reductions, recommended that the focus be shifted from reengineering projects to the selection and implementation of standard information systems that could be used departmentwide. In November 1992, the Assistant Secretary of Defense for Production and Logistics—now called the Deputy Under Secretary of Defense for Logistics (DUSD(L))—issued the Logistics CIM Migration Master Plan. This plan established the selection of migration systems as the CIM implementation strategy within the logistics business activities. This “migration systems strategy” called for identifying the best operational logistics information systems and deploying them across all the services and defense agencies. DUSD(L) believed that this approach would not only make logistics operations more efficient (areas would mirror the best in DOD) but would also eliminate the cost of developing and supporting redundant systems designed to perform the same basic business functions. The strategy was designed to gradually migrate the military services and defense agencies from their multiple and often redundant information systems by (1) selecting and deploying migration systems—either single information systems or groups of information systems—in each logistics activity departmentwide, (2) improving current business processes and adding new functions to fill voids, and (3) combining the improved and new business processes with the new information systems to form a corporate logistics process. For example, Defense had identified over 200 large and numerous smaller depot maintenance and materiel management logistics systems with the goal of first reducing the number of these separate systems to as few as 32 and then using these systems to migrate toward a single logistics standard information system. DOD’s efforts to standardize and migrate information systems across the logistics areas of depot maintenance, materiel management, and transportation have not achieved expected results. Recently, DOD acknowledged that the deployment of standard information systems will not provide the dramatic improvements and cost reductions envisioned under the CIM initiative and is now emphasizing alternative ways for meeting these objectives. At the same time, however, it is continuing to deploy the information systems selected under the failed migration strategy. Our reviews of DOD migration system efforts for depot maintenance, materiel management, and transportation operations confirm that, to date, the strategy has failed to produce the dramatic gains in efficiency and effectiveness that DOD anticipated. More specifically: Our review of depot maintenance systems found that even if the migration effort were successfully implemented as envisioned, the planned depot maintenance standard system would not dramatically improve depot maintenance operations in DOD. First, under the CIM initiative, DOD planned to invest more than $1 billion to develop a depot maintenance standard system. However, this investment would reduce operational costs by less than 2.3 percent over a 10-year period. Such incremental improvement is significantly less than the order-of-magnitude improvements DOD has said could be achieved through the reengineering of business processes.
Second, by postponing reengineering efforts until after developing the standard system, DOD may make it more difficult to reengineer in the future by increasing the risks of entrenching inefficient and ineffective work processes. Our review of DOD’s materiel management systems effort showed that the Department itself abandoned the migration strategy for this logistics area after it realized that the original goal for achieving a standard suite of integrated systems would require significantly more time and money than originally anticipated. For example, it would take as long as 2 years and as much as $100 million more than originally estimated to develop and deploy the Stock Control System—an application that would assist in requisition, receipt, and inventory processing. After DOD spent over $700 million to migrate to materiel management standard systems, there were no dramatic improvements in materiel management business processes; there were numerous development, scheduling, and contracting problems; and only one application of the Stock Control System had been deployed. That application was delivered basically untested, did not meet user functional requirements, and required much rework, debugging, and testing on the user’s part. Our review of Defense’s transportation migration efforts found that the current migration strategy in the transportation area will not ensure that improvements Defense recognizes as critical to the transportation function are made. A number of studies since 1950 have found that Defense traffic management processes are fragmented and inefficient, reflecting the conflicts and duplication inherent in a traffic management organizational structure consisting of multiple transportation agencies, each with separate service and modal responsibilities. In a 1994 DOD report, Reengineering the Defense Transportation System: The “Ought To Be” Defense Transportation System of the Year 2010, Defense officials maintained that nothing less than fundamental change would be required to achieve the quantum gains in savings and productivity needed to improve transportation business processes. We recently reported that it will be difficult for Defense to realize the benefits of its current reengineering efforts because these efforts do not concurrently focus on how the transportation organization structure should be redesigned. Moreover, we have also recently reported that even though reengineering efforts for transportation are underway, Defense did not assess, in making its migration system selections, the impact that these operational changes would have on the systems selected. DOD’s own studies have acknowledged that the implementation of the migration strategy has not worked. In May 1994, for example, DUSD(L) chartered a team with representatives from the services and Defense Logistics Agency to identify ways to improve the business practices of DOD inventory control points. The team, with industry assistance, found that the migration approach to standardizing and upgrading materiel management information systems was not workable and recommended that efforts to develop the Materiel Management Standard System be discontinued. Similarly, the Commission on Roles and Missions of the Armed Forces, in its logistics case studies, concluded that DOD’s efforts to standardize its management information systems under its CIM initiative would merely result in more compact, standardized versions of DOD’s traditional business operations.
In late 1994, the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD(C3I)) acknowledged that DOD’s logistics migration systems strategy was seriously flawed. The Assistant Secretary said that, as opposed to the private sector, which uses a very different approach, “DOD has virtually no chance of making high impact/quantum changes using the current approach.” In October 1995, the Under Secretary of Defense for Acquisition and Technology called for a revision to the standard migration systems strategy. Currently, for all business areas, DOD is trying alternative ways to achieve its CIM objectives of dramatic business improvement and cost reductions while, at the same time, continuing to deploy migration systems. To improve logistics operations, DOD is now emphasizing systems interoperability—the ability to exchange information between and among business activities—as a critical means for achieving dramatic improvements. To reduce operational costs, DOD is seeking to privatize and outsource certain functions—relying on the private sector to provide services that need not be performed by the Department. These three efforts make up a de facto DOD strategy for improving logistics systems. Each of the current efforts is discussed in more detail below. In calling for a revision to the migration strategy, the Under Secretary of Defense for Acquisition and Technology, in October 1995, stressed the importance of building interoperable systems and processes by relying on common operating environments and standard data exchange—elements which many migration systems do not have. DOD has directed business area managers to view their areas as part of the bigger DOD enterprise and develop information systems that are interoperable. Accordingly, business activities must be able to readily exchange information in order to provide senior managers with the comprehensive overview they need to make dramatic process improvements. In May 1995, the Commission on Roles and Missions of the Armed Forces reported that more than 250,000 of DOD’s employees engage in commercial-type activities. To significantly reduce the costs of Defense operations, the Commission recommended that DOD rely primarily on the private sector for services that need not be performed by the government and reengineer those retained by the Department. Specifically addressing depot maintenance and materiel management activities, the Commission concluded that private contractors could provide essentially all of the services now conducted in government maintenance and inventory facilities more efficiently and effectively. Consistent with the Commission’s recommendations, the Deputy Secretary of Defense announced, in late 1995, that DOD would review opportunities to privatize a whole array of functions that, while important, do not directly contribute to the warfighter in the field. It has been reported that DOD spends about $125 billion each year performing commercial-type support functions, including those of depot maintenance, materiel management, and transportation. It has also been reported that, by privatizing only half of these support functions, DOD could save as much as 20 percent, or $12 billion annually. We have, however, reported that under current conditions of excess depot capacity and limited private sector competition, these savings may not be realized. To achieve these savings, DOD established nine working groups, including one for depot maintenance and one for materiel management.
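The $12 billion estimate cited above implies a straightforward calculation (an illustration of how the figure appears to have been derived, assuming the 20-percent saving applies to the privatized half of the roughly $125 billion in annual support spending; it is not a computation taken from the cited reports):

\[
0.5 \times \$125\ \text{billion} \times 0.20 = \$12.5\ \text{billion} \approx \$12\ \text{billion per year.}
\]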
According to materiel and distribution management working group officials, all business activities are actively being considered for privatization, including those the logistics migration systems are to support. They emphasized, however, that their reviews would not be complete until mid-1996 and that resulting privatization actions would likely take a year or longer to accomplish at initial sites. They also stated that it could take longer than 5 years to fully implement any overall privatization strategy. Although DOD has acknowledged that its migration systems strategy has failed, it continues to deploy migration systems. Over the next several years, DOD plans to spend more than $7.7 billion to deploy these systems in addition to the $1.2 billion it reported having already spent. Table 1 identifies the costs to date and those expected to accrue that DOD reported in its fiscal year 1996-1997 biennial budget exhibits. We did not independently verify DOD’s budget estimates. We asked DOD logistics officials why they continued deployment of the logistics migration systems. They told us that the costs associated with stopping deployment of these systems and then restarting them would be significant. However, they had not performed an analysis to support this view. Also, officials cautioned that stopping migration system deployments could result in a lengthy delay in providing these systems to the services and Defense agencies. However, they acknowledged that immediate assessments are needed to ensure that Defense’s investments in these systems are justified. We encourage DOD to explore alternative ways for improving logistics operations. However, we have two major concerns with its current efforts to develop systems interoperability, privatize commercial-type logistics activities, and deploy migration systems. First, Defense still has not completed the analyses required to determine that its logistics system deployment effort will yield a positive return on investment. Without this decision-making tool, Defense has no assurance that any efforts it makes to improve logistics systems will support its operational improvement and cost reduction objectives. Second, Defense has not yet sufficiently tied its improvement efforts to its overall business objectives through the use of strategic planning—a necessary step to ensure that the billions of dollars being invested in logistics improvement efforts will result in significant improvements in operations. Had it strategically planned for its system migration efforts, it might well have avoided costly strategy failures. We are currently reviewing DOD’s progress in its implementation of its overall logistics strategic plan. In continuing to deploy migration systems without addressing the fundamental problems associated with its selection and deployment of migration systems to date, DOD risks wasting a substantial amount of the additional $7.7 billion it plans to spend over the next few years. In developing systems for depot maintenance, materiel management, and transportation, Defense did not adequately ensure that the hundreds of millions of dollars it spent on development efforts would produce cost-effective and beneficial results. Defense directives require that decisions to develop and deploy information systems be based on convincing, well-supported estimates of project costs, benefits, and risks. These directives establish a disciplined process for selecting the best projects based on comparisons of competing alternatives.
Defense’s principal means for making these comparisons is a functional economic analysis. For each alternative, a functional economic analysis identifies resource, schedule, and other critical project characteristics and presents estimates of the costs, benefits, and risks. Once an alternative is chosen, the analysis becomes the basis for project approval. Any significant change in expected project costs, benefits, or risks requires reevaluation of the selected alternative. In our reviews of DOD’s efforts to implement the migration system strategy across its depot maintenance, materiel management, and transportation business activities, we found that DOD routinely selected and is deploying migration systems without (1) sufficiently analyzing their costs and benefits and (2) considering potentially better alternatives, such as reengineering, privatization, and outsourcing of business functions. Only recently has DOD begun to consider such options. The following are the results of our previous reviews of DOD’s cost, benefit, and risk analyses. Our review of depot maintenance migration found that Defense selected the Depot Maintenance Standard System without analyzing the system’s full development and deployment costs. Instead, it relied on a functional economic analysis of a previously proposed project—the Depot Maintenance Resource Planning system. This analysis understated Depot Maintenance Standard System project costs by at least $140 million by including costs for only some components, and it understated costs for the components it did include. Had Defense followed its own regulations and calculated investment returns on its transportation migration selections, it would have found—based on data available when the migration systems were selected—that two of the selected systems would lose money. The Air Loading Module (ALM) would lose $0.67 out of every dollar invested, and the Cargo Movement Operations System (CMOS) would lose $0.04 out of every dollar invested. DOD’s analyses also did not include all costs associated with its evaluation of in-house systems. At least $18 million in costs were excluded—$16 million for an analysis of candidate migration systems and $2 million for maintaining migration system hardware. Had DOD included these costs in its systems selection analyses, the overall return on investment would have been lower than calculated. Our review of materiel migration system efforts showed that a complete economic analysis of the migration strategy was not made until July 1995—nearly 3 years after the strategy began. Further, when Defense dramatically changed the course of materiel management systems development—abandoning the concept of developing a standard system and instead moving to incremental and individual deployments—it again did not first assess risks, costs, and benefits before proceeding with the change in strategy. Our reviews also found that major changes to operations or potentially better business practices were not assessed during the system selection process. Without a comparison of alternatives, DOD has no assurance that it has selected the most efficient and effective solution. For example, Defense selected a migration system to support its transportation of personal property and plans to spend $63 million over the next 5 years to implement it. Recently, however, DOD began actively seeking to privatize major components of this function.
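The per-dollar loss figures cited above for ALM and CMOS are expressed as returns per dollar invested. As a general illustration of one way such figures can be read (the formula below is the standard return-on-investment calculation, not one drawn from the Defense analyses, and the $0.33 benefit figure is simply the value implied by a $0.67 loss per dollar under that reading), a project that loses $0.67 on every dollar invested is one whose estimated benefits amount to only about $0.33 for each dollar of cost:

\[
\text{return on investment} = \frac{\text{benefits} - \text{costs}}{\text{costs}}, \qquad \frac{\$0.33 - \$1.00}{\$1.00} = -0.67.
\]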
As a result of this privatization effort, further spending on the migration system may be questionable since the system may no longer be needed. Similarly, DOD is deploying migration systems to support its materiel management operations without sufficient assessment of recent DOD initiatives focusing on privatizing materiel management operations or consolidating inventory control points. As a result, Defense may end up spending millions of dollars on systems for functions that it no longer performs or on inventory control points that are later consolidated. Our previous reports made a number of recommendations to help ensure that DOD selected the systems that offered the most effective solutions at least cost. These recommendations included preparing documentation that described system efforts and validated that they were the best alternatives for improving their respective business areas. Although DOD partially agreed with some of our recommendations, it has essentially continued to deploy systems without the adequate economic analysis and full comparisons of available alternatives needed to ensure that it is making the best investment of its resources. Nevertheless, DOD is required to manage its information technology as investments. The Clinger-Cohen Act of 1996 was passed to stop government spending on systems projects that far exceeded their expected costs and yielded questionable benefits to mission improvement. Specifically, under the Clinger-Cohen Act, DOD is required to design and implement a process for selecting IT investments using such criteria as risk-adjusted return on investment and specific criteria for comparing and prioritizing alternative information system projects. If implemented properly, this process should provide a means for senior management to obtain timely information regarding progress in terms of costs, capability of the system to meet performance requirements, timeliness, and quality. Many of the problems we found in our past reviews of logistics systems efforts may well have been prevented had Defense employed strategic information planning before embarking on its CIM improvement efforts. Studies of private sector organizations show that strategic information planning is fundamental for achieving any significant level of performance improvement. Through the Clinger-Cohen Act, the Government Performance and Results Act (GPRA), and the Paperwork Reduction Act (PRA), the Congress has underscored the importance of strategic planning for the efficient and effective use of information technology. The Clinger-Cohen Act also requires that the investment process for information technology be integrated with processes for making budget, financial, and program management decisions. For Defense, such planning would establish a direct link between its business objectives and information technology use. In turn, this would have helped Defense focus on meeting the objective of dramatic improvement in operations rather than incremental change. Private industry experience and our studies of public and private organizations have shown that cohesive plans resulting from strategic information management—managing information and information technology to maximize improvements in business performance—are crucial for developing information systems that support substantial business improvement.
For example, in early 1993, the International Business Machines (IBM) Consulting Group reported on its extensive case study of 17 exemplary companies chosen from an initial list of 200 companies in a wide range of industries. The IBM study found that the best companies had well-structured and well-explained information management plans that closely integrated with their business planning processes. Also, these plans aligned the use of information technology with business objectives to improve performance and deal effectively with changes in the business environment. The study also found that these companies did not invest in an information system until they clearly understood how and to what extent the proposed information system would enhance their business environment. Our studies of how leading private and public organizations have applied information technology to improve their performance have also found that organizations achieving substantially higher levels of performance had a disciplined, outcome-oriented, and integrated strategic information management process. For example, one organization that lacked a business vision—a definition of how the organization would work in the future—and an integrated strategic information management process, spent the majority of its resources maintaining existing, aging information systems. By integrating its planning, budgeting, and evaluation processes, the organization was able to shift about a third of its information systems personnel to reengineering projects. These new improvements in turn increased productivity and the quality of customer service. With GPRA, the Congress has recently underscored the importance of strategic planning by clarifying and expanding the requirement for a strategic information resources management plan first called for under the Paperwork Reduction Act of 1980. GPRA requires that agencies submit to the Office of Management and Budget, by September 1997, a strategic plan for their activities, including a comprehensive mission statement as well as goals and objectives for the agency’s functions and operations. The Clinger-Cohen Act supports the GPRA requirement of establishing goals for improving the efficiency and effectiveness of agency operations by improving the delivery of services to the public through more effective use of information technology. In late 1995, DOD proposed a new policy requiring the development of a DOD-wide strategic information resources management plan, with supplements for each DOD component, that would integrate the use of its information technology resources with its budgeting processes. While we support DOD’s efforts to establish a strategic information resources management planning process, the new policy, as proposed, does not require the DOD-wide plan and component supplements to be anchored in the Department’s business strategies. Without a direct link between its business objectives and information technology use, we believe that DOD risks developing a strategic information resources management (IRM) planning process that will become merely a reactive exercise to immediate priorities that are not adequately weighed against those of the future. We discussed our concern about DOD’s current efforts to make dramatic logistics improvements without a cohesive strategic information plan with the DUSD(L) and the Assistant Deputy Under Secretary of Defense for Logistics Business Systems and Technology. 
They stated that they had begun developing a strategic IRM plan that integrates business and systems strategies. This plan, they said, is needed to move from the migration systems strategy to a new business-oriented strategy, and they agreed that migration systems that do not fit under this new strategy should be halted. DOD has acknowledged that its logistics migration strategy for improving its automated logistics information systems is flawed and has embarked on other efforts to develop interoperable systems and privatize commercial-type functions where it can save money. However, as it embarks on these other efforts, Defense is still not addressing the critical weaknesses associated with its previous strategy. By not doing so, it will continue to encounter unmanaged risks, low-value information technology projects, and too little emphasis on redesigning outmoded work processes. In essence, the new strategy will be just as risky as the previous strategy until Defense adopts the key ingredients needed to ensure successful information technology investments: (1) conducting thorough economic and risk analyses so that senior managers can begin examining trade-offs among competing proposals and prioritizing projects based on risk and return, and (2) developing a strategic IRM plan defining how information technology activities will help accomplish agency missions. By adopting the framework for strategic planning mandated by the Government Performance and Results Act and managing its information technology projects as investments as called for in the Clinger-Cohen Act, DOD can begin delivering, at an acceptable cost, high-value information technology solutions for logistics operations. To ensure that DOD optimizes its use of information technology to achieve its logistics CIM goals of dramatic business process improvement and operational cost reduction, we recommend that the Secretary of Defense:
• Direct that immediate cost-benefit analyses of each logistics migration system be undertaken and halt deployment of those that (1) cannot be shown to provide a significant return on investment, (2) will not facilitate ongoing efforts to privatize logistics business functions, or (3) do not support efforts to achieve interoperability between and among business activities.
• Expedite development of a strategic information resources management plan that anchors DOD’s use of logistics information resources to its highest priority business objectives. The plan should conform with requirements established by the Government Performance and Results Act of 1993, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996.
The Department of Defense provided written comments on a draft of this report. These comments are summarized below and reprinted in appendix II. The Deputy Under Secretary of Defense for Logistics generally agreed with our findings and conclusions. Defense also agreed with our recommendation that the Department develop a strategic information resources management plan for logistics and is currently developing such a plan. Defense disagreed with our recommendation to conduct cost-benefit analyses of current logistics development activities to ensure that those systems now being deployed will provide significant returns on investment. It contended that the strategic information resources plan being developed for the logistics area will create an environment that effectively controls the development and modernization of information systems.
As part of this plan, Defense stated that overall DOD business objectives, mission requirements, and economic efficiency will be considered in making decisions to halt, proceed, or change the direction of the development/deployment process. We support DOD’s stated efforts to establish a more effective investment process for logistics information systems. However, we believe that as it develops its strategic plan, Defense should conduct cost-benefit analyses for its ongoing development efforts. As noted in our report, Defense still plans to spend more than $7.7 billion in the next few years developing and deploying migration systems. If it does not take steps to determine whether this significant investment is worthwhile, it will continue to risk wasting these funds, as it has in the past. Had cost-benefit analyses been correctly done for transportation, Defense would have found that some of its migration investments would have produced negative returns. Had a cost-benefit analysis been correctly done for depot maintenance, Defense would have found benefits to be far less than the dramatic improvements originally envisioned. Had Defense conducted cost-benefit analyses before it embarked on its materiel management efforts, it would likely have concluded that it should abandon the concept of developing standard systems before spending hundreds of millions of dollars on the effort. For the future, if Defense does not follow our recommendation to conduct cost-benefit analyses of its current projects, it will miss opportunities to identify more projects showing little promise for return and to redirect its investment to development efforts that more effectively support military missions. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations, the Senate Committee on Armed Services, the House Committee on Government Reform and Oversight, and the House Committee on National Security; the Chairman of the Senate Committee on Governmental Affairs; the Ranking Minority Member of the Subcommittee on Military Readiness of the House Committee on National Security; the Secretaries of Defense, Army, Navy, and Air Force; the Commandant of the Marine Corps; the Director of the Defense Logistics Agency; the Deputy Under Secretary of Defense for Logistics; and the Director of the Office of Management and Budget. Copies will be made available to others on request. If you have any questions about this report, please call me at (202) 512-6240, or Carl M. Urie, Assistant Director, at (202) 512-6231. Major contributors to this report are listed in appendix III. To determine whether DOD’s efforts to standardize its logistics migration systems will allow Defense to meet its business objectives of dramatically improving the efficiency and effectiveness of its logistics operations, we identified problems DOD has had implementing information systems selected under its migration strategy by analyzing prior GAO reports on DOD’s CIM efforts related to logistics business activities. Also, other ongoing GAO reviews provided the results of cost and benefit analyses, risk assessments, and interviews with program and technical officials responsible for implementing migration systems in the materiel management and transportation business areas.
We evaluated the strategies, policies, and memoranda establishing DOD’s Enterprise Model, CIM initiative, and logistics migration information systems strategy to determine whether DOD’s migration systems strategy is consistent with DOD’s corporate business vision for balancing investments across the Department and optimizing its operational effectiveness. Also, we reviewed the findings of studies conducted by the Commission on Roles and Missions of the Armed Forces and DOD for achieving dramatic increases in operational efficiency. To identify private and public organizations that have successfully managed information technology use to obtain superior business performance, we researched technical and business databases, reviewed literature from technology vendors, and reviewed prior GAO work, and we compared the private sector approach with DOD’s strategy for using information technology. Focusing on DOD’s new efforts to develop interoperable information systems emphasized in the enterprise model and to privatize and outsource commercial-type activities as recommended by the Commission on Roles and Missions, we compared DOD’s actions and plans for implementing depot maintenance, materiel management, and transportation migration systems with its business vision. Also, we compared the business activities DOD is considering privatizing with those the migration systems are to support. We compared the “best practices” of private and public organizations with DOD’s logistics migration strategy to identify actions that could increase the probability of achieving logistics business objectives and maximizing the return on technology investments. We interviewed senior Defense officials responsible for managing the CIM initiative, implementing the logistics migration strategy, and developing privatization plans. We also met with program and functional officials, including DOD managers responsible for deploying the depot maintenance and materiel management migration systems. Our work was performed from August 1995 through August 1996 in accordance with generally accepted government auditing standards. We performed our work primarily at the offices of the Deputy Under Secretary of Defense for Logistics in Washington, D.C.; the Joint Logistics Systems Center, Wright-Patterson Air Force Base, Ohio; and the Automated Systems Demonstration, Warner Robins Air Logistics Center, Georgia.
James E. Hatcher, Core Group Manager
Sanford F. Reigle, Evaluator-In-Charge
Thomas C. Hewlett, Staff Evaluator
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts over the last 4 years to improve its information systems in the depot maintenance, materiel management, and transportation business areas, focusing on whether selected standard information systems will allow DOD to meet its business objective to dramatically improve the efficiency and effectiveness of its logistics operations. GAO found that: (1) DOD's continued deployment of information systems using a migration strategy for the depot maintenance, materiel management, and transportation business areas will not likely produce the significant improvements originally envisioned; (2) for the most part, these efforts, which were intended to lay the groundwork for future dramatic change by first standardizing information systems and the related processes throughout DOD, are merely increasing the risk that the new systems that are deployed will not be significantly better or less costly to operate than the hundreds of logistics information systems already in place; (3) DOD itself has acknowledged that its migration systems strategy will not provide necessary dramatic improvements and cost reductions and is now emphasizing alternative ways of improving logistics business operations, such as turning to the private sector to carry out major logistics functions; (4) at the same time, however, it is continuing to deploy information systems selected under the migration strategy that are linked to the very same business functions it wishes to make more efficient and economical through outsourcing and/or privatization; (5) while GAO is encouraged that DOD is exploring alternative ways to improve its logistics operations, it is concerned that the current path needlessly risks wasting a substantial amount of the more than $7.7 billion DOD plans to invest in improving automated logistics systems; (6) DOD still has not taken the fundamental steps necessary to ensure that the automated systems it continues to deploy will yield a positive return on investment; (7) even as DOD embarks on its new improvement efforts, it has not yet sufficiently tied these new efforts to its overall business objectives through the use of a strategic investment strategy to ensure that the billions of dollars will be wisely spent; (8) such planning would be in keeping with best private- and government-sector practices as well as with new legislation which underscores the importance of strategic information planning for the efficient and effective use of information technology; and (9) without addressing these concerns, DOD's new improvement efforts, like the failed standard migration strategy, will proceed with little chance of achieving the objectives originally envisioned for substantial operational improvements and reduction in costs.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) required the Secretary of Education to provide grants to states that show promise in meeting the objectives of four broad education reform areas outlined in law. Education subsequently established the RTT grant fund to encourage states to reform their K-12 education systems and to reward states for improving certain student outcomes, such as making substantial gains in student achievement and improving high school graduation rates. The reforms contained in RTT were expected to help prepare students to graduate ready for college and career, and enable them to successfully compete with workers in other countries. Providing a high-quality education for every student is also vital to a strong U.S. economy. States competed for RTT grant funds based on reforms across the following four core reform areas:
1. Standards and assessments: adopting standards and assessments that prepare students to succeed in college and the workplace and to compete in the global market;
2. Data systems: building data systems that measure student academic growth and success and inform teachers and principals about how they can improve instruction;
3. Effective teachers and leaders: recruiting, developing, rewarding, and retaining effective teachers and principals, especially where they are needed most; and
4. School turnaround: turning around the lowest-achieving schools.
Education awarded RTT grants to states in three phases, with award amounts ranging from approximately $17 million to $700 million (see appendix II for a list of grantees and award amounts). States are generally required to sub-grant at least 50 percent of their RTT funds to school districts within their state that signed a Memorandum of Understanding stating their agreement to implement all or significant portions of the state’s RTT plan (participating districts). According to Education officials, providing a competitive grant with substantial funding to implement ambitious plans in the four core education reform areas was meant to encourage states to create the conditions for reform and achieve significant improvement in student outcomes (see fig. 1). The 4-year grant period began on the date funds were awarded to the state. Education officials stated that, under federal law, any of the Recovery Act funds used in 2010 for the first two phases of RTT that are not obligated and liquidated by September 30, 2015, will no longer be available. Education made grants for the third phase of RTT from fiscal year 2011 funding, and officials told us that those funds must be liquidated by September 30, 2017. In awarding the RTT grants, Education used a peer review process to evaluate applications. Capacity to implement, scale up, and sustain RTT reforms was one of 19 primary criteria Education used to guide the selection of RTT grantees (see appendix III for a list of these criteria). Education did not provide a definition of capacity, but it provided guidance to peer reviewers on how to assess the specific criterion related to capacity: building strong statewide capacity to implement, scale up, and sustain proposed plans.
Peer reviewers evaluated states on the extent to which they demonstrated that they would: (1) provide strong leadership and dedicated teams to implement the reforms; (2) support participating districts in implementing the reforms through a variety of activities, such as identifying and disseminating promising practices; (3) provide efficient and effective operations and processes for grant administration and performance measurement, among other functions; (4) use RTT funds to accomplish the state’s plans; and (5) use fiscal, political, and human capital resources to continue successful grant-funded reforms after RTT funds are no longer available. The capacity of grantees is a key issue in grants management that can affect program success. Capacity involves both maintaining appropriate resources and the ability to effectively manage those resources. For the purposes of this report, we defined capacity as the ability to successfully support, oversee, and implement reform efforts. It includes the following types of capacity:
Organizational Capacity: degree of preparedness for grants management and implementation, including having the appropriate leadership, management, and structure to efficiently and effectively implement the program and adapt as needed.
Human Capital Capacity: the extent to which an organization has sufficient staff, knowledge, and technical skills to effectively meet its program goals.
Financial Capacity: the extent to which an organization has sufficient financial resources to administer or implement the grant.
Stakeholder Capacity: the extent to which an organization has sufficient support from its stakeholders, including their authority and commitment to execute reform efforts.
We and other researchers have noted that capacity concerns may have important implications for competitive grants generally. For example, in 2011 and 2012, we reported on the School Improvement Grant program, another competitive grant awarded by Education, and found that human capital and stakeholder capacity issues influenced the implementation of School Improvement Grant interventions. In addition, a 2011 Journal of Federalism study demonstrated that applicant capacity is an important factor likely to influence how competitive grants are administered and that an applicant’s chances of winning competitive grants are strongly related to their capacity. Other researchers also raised concerns about states’ capacity given relatively modest levels of investment in school improvement activities, as well as human resources, organization, and political challenges. In a January 2014 report, Education’s Inspector General identified common capacity-related causes for delays, such as changes in state leadership; staffing and organizational challenges at state educational agencies; acquisitions issues; and stakeholder issues, particularly regarding the new evaluation systems. In 2011, Education established the Implementation and Support Unit, within the Office of the Deputy Secretary, to administer the RTT program. The purpose of the Implementation and Support Unit was to support the implementation of comprehensive reforms at the state level, pilot new approaches to strengthen and support state reforms, and act as a single point of contact for the Education programs that were housed in that office. The office was responsible for fiscal and programmatic oversight of all aspects of RTT, including monitoring and technical assistance.
The Implementation and Support Unit established a program review process to monitor RTT states’ progress toward meeting their RTT goals and to tailor support based on individual state needs. The program review process emphasized outcomes and the quality of RTT implementation by states rather than focusing solely on a compliance-driven approach. Program officials and other staff in the Implementation and Support Unit were to work directly with states to understand their RTT plans and objectives, observe benchmarks, and monitor the quality of implementation. Education considered each state’s progress toward its goals and timelines, risk factors and strategies for addressing them, and the state’s own assessment of its quality of implementation, among other factors. In October 2014, Education established a new Office of State Support, which replaced the Implementation and Support Unit in the administration and oversight of RTT. Education provides technical assistance to RTT states via the Reform Support Network (RSN), which it established in 2010 through a 4-year, $43 million technical assistance contract with ICF International. The RSN is intended to work with RTT states to build capacity to implement and sustain reform efforts and achieve improvements in educational outcomes, identify and share promising and effective practices, as well as facilitate collaboration across states and among the many education stakeholders who implement and support state reform efforts. RSN is to provide RTT grantees one-on-one technical assistance that is tailored to the grantee’s RTT reform plans. RSN is to ensure that the state requesting individualized technical assistance receives the best available and relevant expertise by identifying specific experts that a state can contact for help. RSN also provides collective technical assistance to RTT states through communities of practice. Communities of practice use a variety of mechanisms to support states in meeting their RTT goals, including the use of working groups, publications, and various forms of direct technical assistance, such as webinars and individualized technical assistance. RSN established a capacity-building community of practice designed to strengthen the organizational capacity of RTT states and a working group to help states assess the sustainability of their reform initiatives and take action if needed. RTT accelerated reforms under way or spurred new reforms in all 19 states and in an estimated 81 percent of districts that were awarded RTT grants, according to states and districts we surveyed (see fig. 2 for district survey responses). For example, several state officials reported in their survey comments that their states began implementing reform activities—such as developing standards, longitudinal data systems, and new teacher evaluation systems—before they received RTT funds. In addition, 16 states reported that RTT provided the opportunity to accelerate or enhance existing reform plans or existing priorities. For example, one state official reported that RTT allowed their state to increase courses in science, technology, engineering, and math for students and teachers and provide professional development opportunities for pre-kindergarten teachers. In addition, RTT may have helped promote reforms not only within the 19 states that received RTT grants, but also in the states that applied but did not receive RTT funding.
A 2014 Education study found that although RTT states implemented more reform activities in the four core reform areas than non-RTT states, many non-RTT states also adopted similar reforms. Specifically, many of the 47 states that applied for the grant had aligned their educational policies and actions to RTT’s four core education reform areas to develop a competitive application. For example, 43 states had adopted Common Core State Standards (Common Core) in both math and reading/English language arts in the 2010-11 school year. Adopting college- and career-ready standards was one of the 19 criteria peer reviewers used to select RTT grantees. Similarly, our prior work on RTT found that four states that applied for but were not awarded an RTT grant reported enacting new state legislation or making formal executive branch policy changes to be more competitive for RTT. Further, our 2011 report found that sharing information with all states carrying out initiatives similar to those under RTT can accelerate the pace and scope of reform efforts. Education developed RTT resources and subsequently made them available to all states on its website. In our survey of states and districts that received RTT funds, we asked officials to identify capacity challenges they faced in implementing and sustaining RTT and the level of difficulty associated with each challenge identified. In general, capacity issues posed a moderate level of challenge to states and currently participating districts implementing RTT. However, some states and districts described particular aspects of the four types of capacity—organizational, human capital, financial, and stakeholder—as very or extremely challenging. For example, RTT states rated stakeholder capacity as the greatest challenge faced while implementing RTT reform initiatives. Overall, they rated this challenge as moderate; however, about one-quarter to one-third of RTT states reported that obtaining support from state legislatures, organizations that represent teachers and/or administrators, and district leaders was very or extremely challenging. Further, in implementing changes in two of the four core reform areas—standards and assessments and effective teachers and leaders—more than one-third of RTT states found stakeholder capacity to be very or extremely challenging. Although states were encouraged to show in their grant applications that they had garnered support for reforms from stakeholders, some states said that they had difficulty maintaining that support throughout the grant period. One state official told us that the state’s teachers’ union was seeking to reverse elements of its evaluation system linking teacher performance to student achievement, and the legislature was seeking to reverse the adoption of the Common Core—key elements of the state’s RTT application. RTT states rated organizational capacity as the second greatest challenge faced while implementing RTT. Although they rated this challenge as moderate overall, officials from 4 of the 19 states reported that consistency in leadership at the state educational agency was a specific aspect of organizational capacity that was very or extremely challenging. One state official we spoke with explained that frequent turnover at the superintendent level made implementing its teacher evaluation system difficult because the state had to constantly educate new superintendents on how to use the evaluations to improve instruction.
School districts reported facing different types of capacity challenges than did states. For example, school districts currently participating in RTT reforms reported that financial capacity was the most challenging. In each of the four core reform areas, about one-third of currently participating districts reported that financial capacity was very or extremely challenging in implementing RTT initiatives (see appendix IV). District officials we surveyed stated in their written comments that decreased state funding, the effects of the 2008 recession, and increasing enrollments affected their financial capacity to fund reform at the local level. While RTT grant funding to currently participating districts represented an estimated 1 to 2 percent of their budgets during each school year of the grant period, district officials told us that RTT funds were crucial to their ability to implement reforms. Districts also reported particular difficulties with human capital capacity—the second greatest challenge they faced in implementing RTT. Districts currently participating in RTT reported that the most challenging aspect of human capital capacity was recruiting staff through competitive compensation, with an estimated 45 percent of districts reporting that doing so was very or extremely challenging. An estimated one-third of currently participating districts also cited retaining staff and having the appropriate number of staff among the most challenging aspects of human capital capacity, as well as issues related to Common Core implementation, such as having staff prepared to develop and/or implement curricula meeting the new standards. States and districts reported taking various actions to build and increase their capacity overall throughout the grant period (see fig. 3). However, both indicated that human capital and financial capacity would be the most challenging to sustain after the RTT grant period ends. State and district officials we spoke with explained that these issues were interrelated; that is, staff shortages and skill gaps required continued funds for professional development. Throughout the grant period, more than half of the 19 states reported putting great or very great effort into building stakeholder capacity—the area that state officials cited as the most challenging—most frequently by consulting with organizations that represent teachers and/or administrators (17 states), consulting with district leadership (16 states), and building political relationships (15 states). Similarly, most states reported building organizational capacity—another area that presented great challenges as they implemented reforms—by, for example, establishing an RTT point of contact or office (18 states) and establishing communication mechanisms for RTT staff, such as group email lists (17 states). To a lesser extent, states reported that reorganizing an existing office (12 states) and appointing new RTT leadership (13 states) were also helpful in building organizational capacity. According to one state official we spoke with, the state reorganized its entire state educational agency into departments aligned with its RTT reforms. The official noted that the RTT grant helped the state fund the reorganization, which, in turn, helped it mitigate capacity challenges throughout implementation.
Another state official explained that the state focused on reorganizing how staff conduct their work by fostering collaboration among program officers. School districts—whose second greatest capacity challenge related to human capital—reported making great or very great effort to build human capital capacity for RTT reform by training existing staff (80 percent), expanding the responsibilities of current staff (74 percent), and shifting responsibilities among staff (64 percent). Similarly, all three district officials we spoke with in our follow-up interviews noted that efforts to build human capital capacity focused on training and shifting the roles of their current staff. One district official explained that the district avoided funding new staff positions that it might not be able to retain after RTT funds ended. To build financial capacity, an estimated 23 percent of currently participating districts reported receiving supplemental funding from their state general fund. Additionally, an estimated 7 percent of districts reported receiving funds from foundations to build capacity. Despite their efforts, state and district officials reported that capacity struggles would likely remain once the RTT grant period ends. For both states and districts, financial capacity and human capital capacity represented the greatest challenges to sustaining reforms (see fig. 4). However, states and districts also reported planning to take various actions to help sustain their capacity for reform. All 19 states, as well as an estimated 84 percent of currently participating districts, indicated that retaining staff with requisite knowledge and skills is part of their plan to sustain RTT reform efforts. For example, one district official explained that the district used a large portion of its RTT funds on training for teachers and administrators. Using the RTT funds for this purpose—as opposed to hiring many new staff—helped the district build capacity and institutional knowledge that would be easier to sustain once the RTT funding ends. Additionally, 17 states indicated that modifying existing staff roles and responsibilities was their second most frequently planned action to sustain RTT reforms. An estimated 72 percent of districts indicated that building institutional knowledge was their second most frequently planned action to sustain RTT reforms. Rural school districts reported facing significantly greater challenges than urban districts in the standards and assessments and data systems core reform areas when implementing RTT, according to our survey results (see fig. 5). These survey results are consistent with our past work on the capacity challenges rural districts face. For example, in a 2013 report, we found that a rural district in New York faced unique difficulties implementing its teacher evaluation system because its small student population required some teachers to teach more than one subject, which made the evaluation process more complex and time-consuming. Similarly, our prior work on implementation of School Improvement Grants showed that rural districts had difficulty attracting and retaining high-quality teachers and implementing increased learning time requirements, in part because of higher transportation costs in rural areas.
In addition, in responding to our survey, rural districts reported anticipating more difficulty than urban districts in sustaining all four types of capacity after the RTT grant period ends, and anticipated more difficulty than suburban districts in sustaining three of the four capacity types. For example, according to our survey, an estimated 40 percent of rural districts anticipated that human capital capacity would be very or extremely challenging in sustaining RTT reform efforts, compared to 26 percent for urban and suburban districts (see fig. 6). One expert participating on our panel agreed, noting that rural districts would also face challenges sustaining reforms because constrained budgets and a lack of human capital capacity are often particularly acute in rural districts. In addition, a rural district official told us that the district has a small number of employees, and attracting and retaining skilled employees who can perform multiple work functions can be more difficult for rural districts. The official also noted that recruiting staff is a challenge because rural districts are often also among the poorer districts and do not have the resources to implement large-scale hiring efforts. Although states and districts across the country likely face capacity challenges and resource limitations to some degree, research suggests that some rural districts—and states that have many rural districts—may be less likely to have the skills, knowledge, or expertise to overcome these challenges. For example, one 2013 report recommended that states may have to play a much more direct role in guiding school improvement in smaller, rural districts, where capacity is lacking. In addition, a 2014 Education Office of Inspector General report indicated that this approach may be effective in reducing project delays and provided an example of a state that planned to help districts build capacity in order to better support low-performing schools in rural areas. Our prior work and other research demonstrate that states with many rural districts need additional supports in this area. Given that the challenges rural districts reported facing in implementing and sustaining reforms were statistically significantly greater than those reported by urban and suburban districts, a greater understanding of these challenges could help Education provide more targeted support to rural districts. According to Education’s Handbook for the Discretionary Grant Process, Education is to provide technical assistance to grantees to help them achieve successful project outcomes. Education is also required to hold grantees accountable for meeting the commitments made in their approved RTT applications. Education has recognized and reported on challenges facing rural districts. In addition, Education officials stated that they have supported RTT grantees and their rural districts through a series of convenings, work groups, publications, webinars, and individual technical assistance, and provided examples of these activities. However, we reviewed RSN’s technical assistance documents and found that most of the activities were not provided in the manner that RTT states reported finding most helpful—as discussed later in this report—nor were they tailored to helping states address the unique capacity challenges that rural districts reported facing in the reform areas identified in our survey.
Unless Education provides assistance specifically designed to help states support their rural districts in addressing their capacity challenges in implementing and sustaining high-quality reform, states may not be able to help the districts that need it the most. According to our state survey, individualized technical assistance provided by Education program officers was the most helpful resource when building capacity to implement and sustain reform plans (see fig. 7). This was consistent with the views of officials we interviewed in four RTT states, who described very positive interactions with their Education program officer. For example, state officials explained that the program officers practiced collaborative problem-solving and provided a significant amount of support to the state as it implemented reform activities. The next most helpful resources, according to our state survey, were technical assistance provided by other staff in the Implementation and Support Unit and RSN. One state official we spoke with noted that Implementation and Support Unit staff provided useful information on how other states were implementing their reform activities. An official from another state explained that the state is working closely with RSN to better understand how to work with its participating RTT districts to better leverage federal funding to improve student outcomes. As shown in figure 7, RSN’s communities of practice ranked fourth in terms of helpfulness in building capacity to implement and sustain RTT reform. According to state officials and one expert participating on our panel, these communities of practice encouraged collaboration across states, which has helped them leverage knowledge, talent, and resources, as well as facilitate the sharing of promising practices. Education officials observed similar value in RSN’s communities of practice, noting that through them, states had a forum in which to learn from each other and discuss RTT implementation issues. It is worth noting that state officials we interviewed commented that communities of practice may have been more helpful to states that were in the early stages of implementing RTT reforms. For example, one official noted that their state was farther along in implementing its teacher and principal evaluation system and school turnaround efforts and therefore did not gain as much from those communities of practice. State officials ranked RSN’s capacity-building community of practice and web-based resources from Education and RSN among the least helpful to states. Education officials similarly noted that while webinars were an easy way to disseminate information, they are likely not as valuable as other RTT resources because they are not as tailored to a particular state’s needs. Two experts participating on our panel noted that although an abundance of school reform-related information exists on websites, little is known about the effectiveness of the information. In December 2013, RSN published the results of an evaluation of its technical assistance activities that generally aligned with the results of our state survey. For example, according to RSN’s evaluation report, participants indicated they were satisfied with the quality of the support and with the format and content of the technical assistance activities provided by RSN.
Individualized technical assistance had the highest ratings because, according to the evaluation report, it was designed to address a state’s specific implementation challenges. In addition, participants in the RSN evaluation indicated that, on average, technical assistance activities had a moderate effect on states’ ability to build capacity overall. The results of the RSN evaluation also showed that while webinars were useful for disseminating information to larger audiences and convening states on a regular basis, they received lower ratings than other forms of assistance. Our body of work on performance measures and evaluations has shown that successful organizations conduct periodic or ad hoc program evaluations to examine how well a program is working. These types of evaluations allow agencies to more closely examine aspects of program operations, factors in the program environment that may impede or contribute to its success, and the extent to which the program is operating as intended. Information from periodic reviews of RSN’s technical assistance efforts is an important factor in determining whether adjustments are needed to help grantees meet their goals for education reform. State officials we surveyed also identified additional activities that Education could undertake that would better assist states with implementing RTT. Specifically, 10 of 19 states reported wanting ongoing professional development throughout the grant period, as opposed to during the early stages of the grant. Ten of 19 states reported wanting training to be provided in their respective states to make it more easily accessible, rather than having to travel to Washington, D.C. Further, 11 of 19 states reported wanting assistance identifying skilled contractors who could assist with reform efforts. Education officials stated that any assistance the department provides to identify contractors cannot compromise the fairness and objectivity of the states’ procurement processes. Education officials also pointed out other legal challenges to identifying contractors, such as prohibitions against endorsements of private entities. However, Education officials stated they can assist grantees by, for example, helping them to develop objective criteria, analysis, or research regarding the qualifications of skilled contractors. They said they can also provide resource lists using objective criteria, as well as technical assistance in this area. In October 2014, Education created the Office of State Support to expand and sustain the collaborative approach to providing oversight and technical assistance that began under the Implementation and Support Unit. More specifically, the purpose of the Office of State Support is to design a coordinated approach across multiple Education programs to reduce redundancy and improve the efficiency and effectiveness of Education’s oversight efforts. The Office of State Support will provide states with one point of contact for multiple education programs that will provide support and technical assistance. The Office of State Support plans to establish advisory committees, involve staff from other education programs in decision making, and maintain close communication with staff from other education programs that have goals and activities similar to those of the programs covered under the new office.
Officials from the Office of State Support stated that the lessons learned from the RTT monitoring and technical assistance processes will inform their work in the new office for programs they oversee—many of which are helping states to facilitate comprehensive education reforms similar to those started under RTT. However, officials stated that they will need to eventually transition to a longer-range plan for monitoring and reconsider how they provide technical assistance because Education’s contract with RSN ends on June 30, 2015. Education officials noted that it was unlikely that the department would receive such a large amount of funding ($43 million) for technical assistance again. They explained that the type and extent of technical assistance efforts to states after the end of the RSN contract will, in turn, be dependent upon the funding available for that purpose. Lastly, they said that they will look to leverage existing technical assistance funds, such as those provided for the Comprehensive Centers program, to help increase state capacity to assist districts and schools. Education’s Handbook for the Discretionary Grant Process requires program offices to develop a monitoring and technical assistance plan for each grant program. In addition, according to Federal Standards for Internal Control, policies and procedures help ensure that necessary actions are taken to address risks to achieving the entity’s objectives. Education has a monitoring and technical assistance plan for RTT, which it has been using for the past four years and has continued to use during the transition from the Implementation and Support Unit to the Office of State Support. However, officials from the Office of State Support stated that they planned to establish coordinated technical assistance processes and procedures for all of the programs administered by the new office, while meeting the needs of the states and their particular initiatives. For example, they said they need to consider how to bring the various kinds of monitoring and technical assistance conducted by different program offices together to provide support for and make connections across programs, and be less burdensome for states. Officials stated that they formed a working group of staff from various Education program offices, including former Implementation and Support Unit staff, to help inform the new office’s coordinated technical assistance policies. However, officials noted that the working group was in the early stages of this process, and had not yet developed any draft policies or established a definitive deadline for accomplishing this task. Given the valuable technical assistance that RSN provided to states, and that Education has not determined the type or amount of technical assistance to be provided, there could be a gap in the type of support that Education can provide to states when the contract expires. Until the Office of State Support develops and finalizes policies and procedures that include support activities states identified as most helpful, Education runs the risk of not providing the most effective assistance to its grantees to help them successfully implement and sustain reform efforts. Our analysis of our expert panel transcript revealed key lessons that could help states and districts address their greatest capacity challenges and help sustain reforms after the RTT grant period ends. 
To address challenges with financial capacity, five of the 10 experts participating on our panel noted that federal formula grants are better suited than competitive grants for building and sustaining capacity because they provide a more stable funding source. Three experts stated that there are several ways that states and districts can leverage the funds they receive annually in formula grants to help sustain reforms. The Title I formula grant—designed to improve schools with high concentrations of students from low-income families—gives districts and schools flexibility to use federal funds to support instructional strategies and methods that best meet local needs. For example, schools where at least 40 percent of students are from low-income families may operate “school-wide” Title I programs, which allow schools to combine Title I funds with other federal, state, and local funds to improve the overall instructional program for all children in a school. In the 2012–2013 school year, approximately 40,632 schools, or 74 percent of all Title I schools, operated school-wide programs. Despite the large number of schools running a school-wide program, districts and schools may not be using the flexibilities to combine Title I funds with other federal funds to their fullest extent due, in part, to a lack of organizational capacity at the state and district levels. According to Education officials and two experts on our panel, states and districts are often uncertain about whether they are allowed to combine federal formula grants in new ways to support comprehensive reforms. For example, Education officials told us that historically, states and districts have used Title II funds—formula grants designed in part to increase student academic achievement through strategies such as improving teacher and principal quality—to reduce class size. However, according to Education’s guidance, states and districts could also choose to combine Title I and Title II funds to sustain reforms initiated under RTT, such as providing academic support coaches and financial incentives and rewards to attract and retain qualified and effective teachers to help low-performing schools. According to five experts on our panel, uncertainties about what is allowed may stem from lack of communication and coordination among the multiple federal education program and financial management offices, and because these offices are not always focused on helping states and districts better leverage their funds. In 2013, the Council of Chief State School Officers developed a toolkit for states to help clarify how districts and schools may spend K-12 federal formula grants. This toolkit encourages states to improve collaboration among offices supported by federal grants to help ensure they effectively leverage federal funds. Currently, Education is working with RSN to develop another toolkit for states and districts on ways to leverage federal formula grants to sustain educational reforms. Education officials could not provide definitive time frames for the release and dissemination of the toolkit, but noted that they are hoping to release it sometime in 2015. This toolkit, when finalized, may help states and districts better understand how to leverage their formula grants to sustain reform activities and help raise student achievement—a primary objective of education reform.
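As an illustration of the school-wide flexibility described above, the sketch below applies the 40 percent low-income threshold and combines several funding streams into a single instructional budget. The enrollment figures and dollar amounts are hypothetical; only the threshold and the ability to combine Title I funds with other federal, state, and local funds come from this report.

# Minimal sketch of the school-wide Title I eligibility test and fund
# consolidation described above. School data and dollar amounts are
# hypothetical; the 40 percent threshold comes from this report.

LOW_INCOME_THRESHOLD = 0.40  # share of students from low-income families

def eligible_for_schoolwide(low_income_students, total_students):
    """A school may operate a school-wide Title I program if at least
    40 percent of its students are from low-income families."""
    return low_income_students / total_students >= LOW_INCOME_THRESHOLD

def consolidated_budget(funding_streams):
    """Under a school-wide program, Title I funds may be combined with
    other federal, state, and local funds into one instructional budget."""
    return sum(funding_streams.values())

# Hypothetical school: 450 of 1,000 students are from low-income families.
if eligible_for_schoolwide(450, 1000):
    total = consolidated_budget({
        "Title I": 350_000,       # all amounts are illustrative
        "Other federal": 90_000,
        "State": 4_200_000,
        "Local": 2_700_000,
    })
    print(f"Consolidated school-wide budget: ${total:,.0f}")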
Education officials and one expert participating on our panel also said that states and districts do not use funding flexibilities to their fullest extent because they have concerns about compliance with state audit requirements. Education officials explained that states and auditors may believe that federal law prohibits certain activities, even when the law and its implementing regulations do not. Education officials told us they tried to address these uncertainties by issuing guidance to clarify how states and districts can leverage federal funds to support reforms. According to this guidance, states may use Title I funds to provide technical assistance to low-achieving schools, and districts may consolidate Title I, Title II, and IDEA funds in schools under the school-wide program to support comprehensive reforms by, for example, extending the school day or school year. However, Education officials said that there is still confusion about this issue, particularly among the audit community, and that Education needs to provide new guidance to help auditors better understand allowable spending within federal formula grants, especially with Title I funds. Education does not, however, have a definitive plan for developing and implementing this guidance. Such guidance—when developed and fully implemented—may help auditors better understand funding flexibilities in existing formula grants and help states and districts fully leverage these flexibilities. Further, the pending reauthorization of ESEA also provides an opportunity to address these capacity issues. Education told us that it is exploring new options to help states and districts build capacity to implement comprehensive reforms, including increasing the portion of Title I grant funds that can be set aside for administrative purposes. Currently, two of the set-asides in the Title I program limit the maximum percentage of funds that can be set aside to support state administrative functions and districts’ school improvement activities. Specifically, ESEA requires that a state generally spend no more than 1 percent (or $400,000, whichever is greater) of its Title I funds on state administration and 4 percent on district school improvement activities. Education told us that the current portion of funds under the ESEA Title I grant that may be used for administrative functions may be inadequate given the range and complexity of state-level work in supporting effective implementation of local Title I projects. In its fiscal year 2016 budget proposal, the Administration proposed increasing the funds a state can spend on administration from 1 percent to 3 percent. According to Education officials, the trade-off, particularly in a tight fiscal environment, is that larger set-asides may reduce the portion of available funds that would transfer to districts and schools to implement programs. In the current Congress, the Student Success Act, which was reported out of the House Committee on Education and the Workforce, would make changes to both of these set-asides. To help address human capital and stakeholder capacity challenges, five experts on our panel noted the importance of fostering partnerships between a state and its districts, among districts within a state, and with non-governmental entities by, for example, convening groups of experts across the state to share expertise, solve problems, and share lessons learned to help leverage knowledge and talent. 
They further noted the potential for such a strategy to solve common challenges, such as how to develop effective strategies for evaluating teachers who teach subjects that are not assessed using standardized tests (e.g., foreign language or art). Universities with research and professional development institutes are another potential resource to help states and districts build and sustain human capital capacity. For example, one expert noted that strong relationships with higher education institutions and teacher unions are needed to revamp teacher, principal, and superintendent training programs and teacher licensure requirements. Lastly, three panelists said that to maintain key stakeholder support for reforms, states need to show progress in meeting their established time frames for RTT reforms or show increases in student achievement. Three experts on our panel noted that competitive grants may be better suited than formula grants for spurring reforms and innovative approaches, but varying levels of capacity among states and districts raise concerns about their ability to win competitive grants and successfully implement large-scale education reforms. Research suggests that states’ capacity was an important variable in helping to predict who applied for RTT funds and which states scored well during the competition. In particular, a 2011 study found that states with quality standards and accountability procedures, and that had achieved overall student gains, were more likely to receive higher scores during the RTT grant competition. When making competitive grant awards in the future, Education officials told us they expect to look at demonstrated capacity as evidenced by a state’s performance under previous grants and may offer a competitive priority for previous success. To help states and districts that may be struggling in these areas, experts participating on our panel made four observations that they believe could be incorporated into the design of future competitive grants to help level the playing field between high- and low-capacity states and districts. Education has incorporated some of the observations into its competitive grant programs to varying degrees and pointed out some advantages and disadvantages of each. Observation 1: Allow joint applications so that states and districts with greater capacity can partner with those with less capacity. Education noted that it used this approach in recent grant competitions. Education encouraged states that opted to adopt a common set of college- and career-ready standards to form collaborative groups to apply for RTT assessment grants to develop assessments aligned with the new standards. A 2011 study proposed that such arrangements could help states with less capacity more easily benefit from the initiatives of ones with more capacity by helping them identify partners and providing them access to funds that may help valuable reforms gain traction. Education officials told us, however, that when they have allowed joint applications or consortia for some competitive grants, the complexity of implementing the grants increased because states have different procurement rules, which take longer to navigate. Education officials also noted that these joint initiatives sometimes take longer to implement because states have to establish a framework for how they are going to coordinate. Observation 2: Stagger or “phase” competitive grant funding to accommodate the varying capacity needs of grantees. 
Education officials told us that they have had mixed success using planning grants to allow grantees additional time to build capacity to implement plans. For example, Education used a two-phase strategy for awarding competitive grants under its Promise Neighborhoods grant program, including 1-year planning grants to organizations to enhance the grantees’ capacity and a separate competition for a 5-year implementation grant to organizations that demonstrated they were ready to implement their plans. However, we recently reported that Education did not communicate clearly to grantees about its expectations for the planning grants and the likelihood of receiving implementation grants. Education officials told us that they do not always have the authority to offer this feature, but they consider it where it is possible. Education officials told us that they are considering adding a planning year to the School Improvement Grant, which is federal money awarded to states that states, in turn, award to districts using a competitive process. Education officials told us that they believe that low-capacity districts could benefit from this approach, but noted that it will be important to emphasize their expectation that grantees use the planning year to build capacity to implement their reform plans. Observation 3: Allow intermediary entities that often help coordinate or provide technical assistance to districts to apply for competitive grants. Education officials told us that they see a benefit to using partners such as nonprofit organizations to drive reform, noting, for example, that the Investing in Innovation program allows nonprofits to partner with school districts as part of the application process and throughout the grant period. Research supports such an approach as well. A 2011 RAND study examining the federal and state role in improving schools in 15 states found that although some states assumed primary responsibility for assisting low-performing schools, others relied on regional organizations, area education agencies, or intermediate school districts to fill this role. However, Education officials noted that applicant eligibility is generally defined in statute. Observation 4: Streamline Education’s grant application processes to make it easier for states and districts with less capacity to apply. Education officials told us that one example of streamlining the grant process was allowing states that did not win an award in the first phase of a competition to revise the same application and resubmit for subsequent phases. Education adopted this strategy in the RTT grant competition. Another way to streamline the grant application process is by encouraging shorter applications. Education officials said Education used this approach in a grant competition for the Investing in Innovation program. Education officials noted that, in general, one disadvantage to shorter applications is that there may not be sufficient detail in the applications to hold grantees accountable for implementing their plans. As Education’s technical assistance contract for RSN comes to a close, and it develops new processes for technical assistance under the new Office of State Support, it has an opportunity to apply the technical assistance that RTT states reported as most helpful, such as individualized technical assistance and professional development, to other grant programs that the office oversees. 
Such technical assistance could help states implement and sustain the comprehensive education reforms that will continue to be supported by other grant programs managed by the Office of State Support. In addition, because rural districts face unique challenges implementing and sustaining RTT reforms, focusing efforts to enhance Education’s understanding of the types of additional supports they may need could help these districts successfully implement and sustain their reform efforts, and ultimately improve student achievement. Further, as the RTT grant period comes to an end, RTT states may need to better leverage their federal formula grants to continue to support comprehensive reform in the absence of RTT funds. Education officials and other experts have emphasized the importance of leveraging existing funding flexibilities in education formula grants to help states implement and sustain large-scale reform efforts. However, concerns about a lack of communication between states’ program and financial management offices, as well as concerns about non-compliance with state and federal requirements, may be limiting states’ willingness to use the funding flexibilities present in current law to develop and implement strategies tailored to their unique local needs. By taking actions to address these issues, Education can help states and districts use their federal funding most effectively to improve student achievement and to support comprehensive school reform. To help ensure that states are better able to sustain RTT reforms and that Education can effectively support other grant programs managed by the Office of State Support, we recommend that the Secretary of Education direct the Office of State Support to fully implement and incorporate into its coordinated technical assistance policies and procedures the types of support that would be useful in sustaining RTT reforms and providing effective support to grantees in other programs supporting education reform that the Office of State Support oversees. These could include: providing individualized technical assistance to states, such as that currently provided by Education program officers; facilitating communities of practice to promote opportunities for collaboration across states; providing professional development (or training) throughout the grant period, as opposed to only during the early stages of the grant; making training more easily accessible by conducting training locally in their respective states, when possible; and to the extent permissible in the context of federal and state requirements and restrictions, exploring the possibility of assisting states in identifying skilled contractors to help implement reform efforts. 
To help states address capacity challenges as they sustain comprehensive education reforms similar to RTT, we recommend that the Secretary of Education direct the Office of State Support to take steps, such as: providing ongoing individualized technical assistance to states to help them target assistance to rural districts, particularly in the reform areas that were most challenging for rural districts; finalizing and disseminating guidance to be included in Education’s toolkit to help states leverage federal formula grants to sustain education reforms; and clarifying and improving understanding of how funding flexibilities in existing formula grants could be used to support education reform efforts to help states and the audit community address impediments to using formula grants in different ways. We provided a draft of this report to the Department of Education for comment. Education provided technical comments, which we incorporated into the report as appropriate. Education’s written comments are reproduced in appendix VI and summarized below. Education did not explicitly agree or disagree with our recommendations, but outlined steps to address many elements contained in them. It also provided additional information related to our findings and recommendations. In response to our first recommendation, Education stated that it shares our interest in supporting states as they sustain RTT reforms and supporting other grant programs under the Office of State Support through performance management and technical assistance. To this end, Education described plans to build on its generally successful RTT monitoring strategy to develop a consolidated technical assistance strategy for all programs under the auspices of the Office of State Support. We have added clarifying language in the body of the report to better reflect existing elements of the RTT monitoring and technical assistance plan. Education’s plan to provide coordinated policy development, performance management, technical assistance, and data analysis services through a structure intended to more effectively support the implementation of key reforms and provide individualized support is a positive step. These coordinated policies and procedures could continue to support RTT grantees as well as other grantees under other Office of State Support programs that have a role in helping states implement comprehensive education reforms. However, we continue to believe that until these policies are fully implemented, Education risks providing less effective support than it otherwise might. Further, as Education’s technical assistance contract for RSN comes to an end, we continue to believe that Education should take explicit steps to incorporate into its new consolidated assistance strategy for all programs under the Office of State Support the technical assistance activities that RTT grantees identified as being most helpful to them in sustaining their reforms. In addition, Education should incorporate those additional supports that states reported as desirable. We have clarified the intent of our recommendation accordingly. In response to our second recommendation, Education agreed that it is important to identify ways to help states target assistance to rural districts. Education stated, however, that the draft report does not adequately recognize the actions it has taken to support RTT grantees in rural states and districts, and provided a list of 17 activities it has undertaken through RSN to support rural areas. 
We acknowledge Education’s efforts to provide support to rural areas and have incorporated additional information in the draft report, as appropriate, to reflect this. However, in further reviewing these 17 activities, we found significant limitations and believe our overall finding and corresponding recommendation are still warranted. Specifically: Nearly all of the activities (16 of 17) were in the form of working groups, convenings, webinars, toolkits, and publications developed by the RSN, many of which were located on the RSN website. According to our survey of all 19 RTT states, web-based resources were among the least helpful to RTT states in building and sustaining the necessary capacity to implement reforms. Only one of the 17 activities provided individualized technical assistance which, according to our survey, was the most helpful form of assistance to RTT states. We realize that Education formed RSN to provide support in a variety of formats and agree that RSN has generally supported RTT grantees well. However, given the unique capacity challenges that rural districts face, we believe there is value in offering technical assistance tailored to the individual needs of rural areas. According to our generalizable survey of districts that received RTT funds, rural districts faced statistically significantly greater challenges than urban districts in implementing reforms in two areas: standards and assessments and data systems. However, 14 of the 17 RSN activities focused on the other two reform areas (school turnaround and effective teachers and leaders). RSN’s efforts to focus resources on assisting states in implementing RTT reforms are important ones, and we believe that many states and districts may have benefited from these efforts. However, in order to best support states that are working to implement and sustain reforms in their rural districts, Education should target future support in the reform areas in which rural districts most struggled: standards and assessments and data systems. Accordingly, we modified our recommendation to clarify that Education should take steps to provide targeted assistance to states in those reform areas that we have identified as statistically significantly more challenging for rural districts. Many of the activities undertaken to support rural districts were conducted in 2012 and 2013 (6 of the 11 that included specific dates) when states and districts were fully engaged in implementing RTT reforms. However, our survey of districts that received RTT funds was deployed from June through September 2014, and the results indicated that rural districts continued to face challenges long after they would have availed themselves of these resources. Some of the activities (6 of 17) provided support that was not specifically tailored for rural districts; rather, it could be applied in rural, suburban, and urban school settings alike. We continue to believe that opportunities exist to help states better target support to rural districts. Without a better understanding of the unique capacity challenges that rural districts face, and a more focused approach to providing support, Education may not be able to help the states and districts that need it the most. Finally, Education recognized the importance of clarifying its guidance on the use of funding flexibilities and provided several examples of “Dear Colleague” letters it has issued to states. We referenced one of these letters in the draft of the report. 
We did not include the other two “Dear Colleague” letters (guidance related to leveraging federal funds to support school counselors and digital education) because they do not address the use of funding flexibilities in support of education reform initiatives, which was at the heart of our finding and corresponding recommendation. To address this apparent confusion, we have clarified our recommendation accordingly. We noted in our report, and Education emphasized, that it is working with RSN to release new guidance in 2015 on ways to leverage federal grants to sustain educational reforms. However, as stated in our report, Education officials could not provide definitive time frames for the release and dissemination of the toolkit. We continue to believe that until this guidance is fully implemented, states and districts will continue to lack clarity on how to leverage their formula grants to sustain reform activities. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. We framed our study of capacity challenges faced by states and districts implementing Race to the Top (RTT) reforms around three objectives: (1) What effect did RTT have on education reform, and what capacity challenges did states and districts face in implementing and sustaining RTT initiatives? (2) How helpful was the assistance the U.S. Department of Education provided to states to build capacity to implement and sustain RTT reforms? and (3) What lessons have been learned from RTT that could inform future education reform efforts? In addressing these objectives, we incorporated elements of “grounded foresight,” a methodological approach developed by GAO to examine future implications by identifying key trends, emerging challenges, and opportunities to inform government’s future role and responsibilities. According to GAO’s internal grounded foresight methodology paper, the heart of the proposed approach consists of three elements of grounding, designed to support GAO’s core values of integrity and reliability: (1) a strong factual-conceptual base, (2) one or more methods for discussing or anticipating the future, and (3) transparent communication of the outcomes. We developed a strong factual-conceptual base to ensure that relevant trends and occurrences related to capacity issues and competitive grants are documented, recognized, and understood as part of the study. We reviewed and analyzed existing literature on capacity issues and competitive grants in K-12 education using GAO’s prospective evaluation synthesis approach. We examined the features of RTT and reviewed findings from published reports to identify capacity challenges. We also deployed two web-based surveys of state educational agency and district officials; reviewed relevant federal laws, regulations, and guidance; and conducted interviews with a variety of federal, state, and local officials. 
We then convened a panel of experts who were knowledgeable about capacity issues and federal grants to obtain their views on the implications of capacity challenges for the sustainability of RTT reform efforts and potential future competitive grants. We made the results of the two web-based surveys publicly available to help ensure transparent communication of the capacity challenges states and districts reported facing. To obtain information on capacity challenges states faced in implementing and sustaining RTT reforms, we conducted a web-based survey of RTT points of contact at each state educational agency in all 19 grantee states. We conducted the survey from May through July 2014. In the survey, we asked RTT states about their capacity to implement RTT efforts, the support received to do so, and efforts to build and sustain capacity for RTT reform, among other things. We received responses from all 19 RTT states for a 100 percent response rate. We reviewed state responses and followed up by telephone and e-mail with selected states for additional clarification and context. We also published survey responses in an e-publication supplemental to this report, RACE TO THE TOP: Survey of State Educational Agencies’ Capacity to Implement Reform (GAO-15-316SP, April 2015). To obtain information on capacity challenges districts faced in implementing and sustaining RTT reform efforts, we conducted a web-based survey of a sample of district officials whose districts received RTT funds. We selected a stratified random sample of 643 school districts from the 3,251 school districts that received RTT funds, out of a population of 18,541 school districts in the 19 RTT states (see table 1). Although the focus was on districts that currently receive RTT funds, we also included districts that initially were participating in RTT but later decided to formally withdraw. We obtained data from Education’s National Center for Education Statistics, which maintains the Common Core of Data for public school districts, for the 2011-12 school year. Our sample allowed us to make estimates to all RTT districts and to subpopulations by urban status of the district. We conducted the school district survey from June through September 2014 and had a 76.7 percent final weighted response rate. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we expressed our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 6 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates in this report have confidence intervals within plus or minus 6 percentage points. For other estimates, the confidence intervals are presented along with the estimates themselves. In the survey, we asked questions about school districts’ capacity to implement RTT efforts, the support received to do so, and efforts to build and sustain capacity for RTT reform, among other things. We reviewed survey responses and followed up by telephone and e-mail with selected districts, as needed, for additional clarification and to determine that their responses were complete, reasonable, and sufficiently reliable for the purposes of this report. 
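To illustrate the type of interval estimate described above, the short sketch below computes a weighted proportion and an approximate 95 percent confidence interval for a stratified random sample. This is a simplified, hypothetical illustration only: the strata, population counts, and responses are invented, and the calculation is not GAO's actual weighting or estimation procedure.

```python
# Minimal, hypothetical sketch of a stratified estimate and its approximate
# 95 percent confidence interval. The strata, population sizes, and responses
# below are invented for illustration; they are not GAO's survey data or methods.
import math

# Each stratum: (population size N_h, sampled responses where 1 = "reported a challenge")
strata = {
    "urban":    (1200, [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]),
    "suburban": (900,  [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]),
    "rural":    (1151, [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]),
}

N_total = sum(N_h for N_h, _ in strata.values())
estimate = 0.0   # weighted proportion across strata
variance = 0.0   # variance of the stratified estimator

for N_h, responses in strata.values():
    n_h = len(responses)
    p_h = sum(responses) / n_h            # proportion within the stratum
    W_h = N_h / N_total                   # stratum weight (share of population)
    fpc = 1 - n_h / N_h                   # finite population correction
    estimate += W_h * p_h
    variance += (W_h ** 2) * fpc * p_h * (1 - p_h) / (n_h - 1)

margin = 1.96 * math.sqrt(variance)       # half-width of an approximate 95 percent CI
print(f"Estimated proportion: {estimate:.3f} plus or minus {margin:.3f}")
```

Because only 30 hypothetical responses are "sampled" here, the resulting margin of error is much wider than the roughly plus or minus 6 percentage points reported above; the precision described in this report reflects the much larger sample of 643 districts.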
We also published survey responses in an e-publication supplement to this report, RACE TO THE TOP: Survey of School Districts’ Capacity to Implement Reform (GAO-15-317SP, April 2015). The quality of the state and district survey data can be affected by nonsampling error, which includes variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. To minimize such error, we included the following steps in developing the surveys and in collecting and analyzing survey data. We pretested draft versions of the instruments with state educational agency officials in three states and officials in four districts to check the clarity of the questions and the flow and layout of the surveys. On the basis of the pretests, we made revisions to both surveys. We contacted respondents to clarify any questions or responses where appropriate. Further, using a web-based survey and allowing state and district officials to enter their responses into an electronic instrument created an automatic record for each state and district and eliminated the errors associated with a manual data entry process. In addition, the programs used to analyze the survey data were independently verified to ensure the accuracy of this work. To obtain information on lessons learned from RTT that could inform future education reform efforts, we convened a group of knowledgeable individuals for an expert panel. In identifying the experts, we compiled a preliminary list of 15 individuals with research or professional experience related to RTT reforms, state and district capacity, federal grant making, and state or federal education policy. These experts represented the following entities: state educational agencies, school districts, education associations, academia, and education think tanks. They also included a former Education official and a representative from Education’s Office of Inspector General. We identified a state educational agency official based on participation in RTT and the state’s proximity to Washington, D.C., where the panel was convened. To obtain a different local perspective, we selected a school district official from a different state. In addition, we selected the school district based on its proximity to Washington, D.C. and the extent to which the district had completed questions in our district survey. An external expert who conducted extensive research on K-12 education and federal policy vetted our initial list of panelists. We used feedback from this expert, along with biographical information about the experts, to determine which experts would be invited to participate. The resulting 10 experts participated in a 1-day panel focused on capacity challenges and their implications for RTT reforms and future competitive grants (see appendix V for a list of participants). Each panelist completed a questionnaire to document any conflicts of interest. This information was not used to determine the qualification of the expert for the panel, but to ensure that we were aware of circumstances that could be viewed by others as affecting the expert’s point of view on these topics. We developed discussion topics and questions for the panelists based on information gathered from the surveys, interviews, and academic literature. A contractor recorded the panel and transcribed the discussion. 
We performed a content analysis of the transcript of the panel discussion to develop common themes among the experts on lessons learned from RTT that could help sustain reform efforts, inform the design or implementation of future education competitive grants, and inform future education reform efforts. We tallied responses for each panelist who commented on those themes. This analysis was independently verified to ensure the accuracy of this work. For all three objectives, we reviewed relevant federal laws, regulations, and guidance—including federal internal control standards and Education’s Handbook for the Discretionary Grant Process—and interviewed federal, state, and district officials and other experts regarding capacity to implement and sustain RTT reforms. We reviewed RTT applications to identify commitments states made to build capacity to implement RTT initiatives. To identify actions taken to build capacity, we compared the states’ commitments to information provided in their progress reports for school year 2012-2013. We also reviewed information on Education’s efforts to assist states with building capacity, such as guidance, technical assistance, webinars, and other information on the RTT website. We interviewed federal officials from the Implementation and Support Unit in Education’s Office of the Deputy Secretary and staff from the newly established Office of State Support. In addition, we conducted interviews with a variety of interested parties, such as educational organizations, researchers, and university professors. For example, we met with representatives from the American Association of School Administrators, the Council of Chief State School Officers, and the Center on Reinventing Public Education, among others. We also conducted follow-up interviews with officials in four state educational agencies and three districts to obtain more detailed information and illustrative examples. We selected these state and district officials based on their responses to our surveys and representation across award phase. We conducted this performance audit from November 2013 to April 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix IV: Capacity Challenges by Race to the Top Reform Area and Type of Capacity, as Reported by States and Estimated by Districts. [Table: for each RTT reform area and type of capacity, the percent and number of states reporting a challenge and the estimated percentage of districts reporting a challenge.]

In addition to the contact named above, Elizabeth Morrison (Assistant Director), Jamila Jones Kennedy (Analyst-in-Charge), Sheranda Campbell, Kathryn O’Dea Lamas, Amanda Parker, and Stacy Spence made significant contributions to this report. Assistance, expertise, and guidance were provided by David Chrisinger, Nancy Donovan, Alexander Galuten, Catherine Hurley, Jill Lacey, Jean McSween, Mark Ramage, Walter Vance, and Mimi Nguyen.
Education created RTT under the American Recovery and Reinvestment Act of 2009. From 2010 through 2011, Education awarded $4 billion in competitive grant funds to 19 states to reform core areas of K-12 education. RTT states also committed to building capacity to implement and sustain reforms. GAO and others previously reported that capacity challenges had adversely affected RTT implementation and could hinder efforts to sustain the reforms. GAO was asked to further examine these challenges. This report examines: (1) the effect of RTT on reform and capacity challenges states and districts faced, (2) how helpful Education's assistance was to states in building and sustaining capacity, and (3) lessons learned that could inform future reform efforts. GAO surveyed all 19 RTT states and a generalizable sample of RTT districts; held an expert panel; reviewed RTT applications, progress reports, relevant federal laws and regulations, and literature; and interviewed officials from seven selected states and districts, chosen based on survey responses. GAO selected expert panelists based on research or experience with RTT, capacity issues, and federal grants. The Department of Education's (Education) Race to the Top (RTT) program encouraged states to reform their K-12 educational systems, but states and districts faced various capacity challenges in implementing the reforms. RTT accelerated education reforms underway and spurred new reforms in all 19 RTT states and in an estimated 81 percent of districts, according to GAO's surveys of RTT grantees and districts that received RTT funds. At the same time, states and districts noted various challenges to their capacity to successfully support, oversee, and implement these reform efforts. For example, about one-quarter to one-third of RTT states reported that their greatest challenges involved obtaining support from stakeholders such as teacher organizations. In contrast, districts primarily reported that their greatest challenges involved financial and human capital capacity, especially with competitive compensation and standards and assessments. Additionally, rural districts reported facing greater challenges than urban and suburban districts. Education is to assist grantees in achieving successful project outcomes according to its grants handbook, while holding them accountable for their RTT reform plans. Yet, GAO found no specific activities tailored to rural needs in areas grantees identified as most challenging. A better understanding of the capacity challenges rural districts face could help Education better target its technical assistance to districts that need it the most. In response to GAO's survey, many RTT states reported that technical assistance from Education officials and its contractor was more helpful than other RTT resources, such as web-based materials. Ten states also reported they would benefit from additional support in areas such as training and professional development. Education created a new office to oversee and provide coordinated support to RTT and other programs, and intends to develop office-wide coordinated technical assistance policies. Federal internal control standards note that adequate policies help ensure that actions are taken to address risks to achieving an agency's objectives. However, Education has not determined the type or amount of technical assistance to be provided and its policies are still being developed. 
RTT's $43 million technical assistance contract ends in June 2015, which may create a gap in assistance to states. Unless Education focuses on technical assistance activities that states found most useful, it risks providing ineffective assistance to programs supporting these education reforms. GAO's panel of RTT and grant experts identified key lessons learned, such as leveraging existing funding flexibilities under federal formula grants, to help address capacity needs and sustain reforms when RTT ends in September 2015. Districts and schools may not, however, be using these flexibilities to their fullest extent, in part because of uncertainty about what is allowed under federal requirements. Federal internal control standards state that information should be communicated in a form that enables an agency to achieve its objectives. Education lacks time frames for finalizing and disseminating new guidance to clarify federal formula grant flexibilities for states, and it recognizes the need for, but has not developed, guidance to help auditors better understand these flexibilities. Such guidance, when finalized, may help states and districts sustain education reforms, thereby raising student achievement, a primary objective of reform. GAO recommends that Education incorporate into its coordinated policies the technical assistance grantees found most useful, target assistance to rural districts, and issue guidance to help states and auditors with funding flexibilities. Education did not explicitly agree or disagree with GAO's recommendations, but outlined steps to address many aspects of them. To view the e-supplements online, see GAO-15-316SP and GAO-15-317SP.
Access to behavioral health treatment—services and prescription drugs to address behavioral health conditions—is important because of the harmful consequences of untreated conditions, which may result in worsening health, increased medical costs, negative effects on employment and workplace performance, strain on personal and social relationships, and possible incarceration. Behavioral health treatment can help individuals reduce their symptoms and improve their ability to function. However, research suggests that a substantial number of individuals with behavioral health conditions may not receive any treatment or less than the recommended treatment, even among those with serious conditions. For example, in 2013, SAMHSA estimated that there were 3.9 million adults aged 18 or older with a serious mental illness who perceived an unmet need for mental health care within the last 12 months. This number includes an estimated 1.3 million adults with a serious mental illness who did not receive any mental health services. One potential barrier to accessing treatment is shortages of qualified behavioral health professionals, particularly in rural areas. SAMHSA noted that more than three-quarters of counties in the United States have a serious shortage of mental health professionals. Behavioral health treatment includes an array of options ranging from less to more intensive, and may include prevention services, screening and assessment, outpatient treatment, inpatient treatment, and emergency services for mental health and substance use conditions. Prescription drugs may also be included as part of treatment for either substance use or mental health conditions. See table 1 for information on select behavioral health treatments. In addition to these treatments, other supportive services exist for behavioral health conditions that are designed to help individuals manage mental health or substance use conditions and maximize their potential to live independently in the community. These supportive services are multidimensional—intended to address not only health conditions, but also employment, housing, and other issues. For example, they include recovery housing—supervised, short-term housing for individuals with substance use conditions or co-occurring mental and substance use conditions that can be used after inpatient or residential treatment. The Centers for Medicare & Medicaid Services (CMS)—a federal agency within the Department of Health and Human Services (HHS)—and states jointly administer the Medicaid program, which finances health care, including behavioral health care, for low-income individuals and families. States have flexibility within broad federal parameters for designing and implementing their Medicaid programs. States may use Medicaid waivers—which allow states to set aside certain, otherwise applicable federal Medicaid requirements—to provide health care, including behavioral health treatment, to individuals who would not otherwise be eligible for those benefits under the state’s Medicaid program. For example, states may use waiver programs to target residents in a geographic region or to target individuals with particular needs, such as those with serious mental illness. States may also choose different delivery systems to provide benefits including behavioral health treatment to Medicaid enrollees, such as FFS or managed care. 
Some states with managed care delivery systems may elect to “carve out” behavioral health benefits, i.e., contract for them separately from physical health care benefits. For example, some states contract with limited benefit plans, which are managed care arrangements designed to provide a narrowly defined set of services. Similarly, states with FFS delivery systems may choose to contract with different companies to administer behavioral health benefits than those that administer physical health care benefits. Our previous work has noted that while using a separate plan to provide mental health services may control costs, it can also increase the risk that treatment for physical and mental health conditions will not be coordinated. A variety of sources provide funding for behavioral health treatment in public programs. Medicaid is the largest source of funding for behavioral health treatment, with spending projected to be about $60 billion in 2014. Another significant source of revenue for state BHAs is state general revenues. In contrast to Medicaid, for which payment of benefits to eligible persons is required by law, state general funding for the treatment of uninsured and underinsured residents is discretionary. The extent to which state-funded treatment is provided may depend on the availability of funding. States may also use SAMHSA mental health and substance use block grants to design and support a variety of treatments for individuals with behavioral health conditions. See figure 1 for information on sources of state BHA revenues for mental health in 2013. (Similar figures for substance use were not available.) The number of states that have expanded Medicaid includes the District of Columbia, which we refer to as a state for the purposes of this report. State BHAs are the agencies responsible for planning and operating state behavioral health systems, and they play a significant role in administering, funding, and providing behavioral health treatment. State BHAs manage behavioral-health-related federal grants and may work with other state agencies—such as state Medicaid agencies—to identify and treat mental health and substance use conditions. State BHAs may contract directly with providers to deliver behavioral health treatments or may contract with county or city governments, which are then responsible for the delivery of treatments within their local areas. State BHAs may also play a role in providing Medicaid enrollees with wraparound services—that is, services that state Medicaid programs do not cover, but that may aid in recovery, such as supportive housing. Nationwide, estimates using data from 2008 to 2013 indicated that of 17.8 million low-income, uninsured adults, approximately 3 million (17 percent) had a behavioral health condition prior to the Medicaid expansion in 2014. Specifically, about 1 million low-income, uninsured adults (5.8 percent) were estimated to have a serious mental illness, while nearly 2.3 million low-income, uninsured adults (12.8 percent) were estimated to have a substance use condition. Underlying these national estimates was considerable variation at the state level. In particular, the percentage of low-income, uninsured adults with a behavioral health condition ranged from 6.9 percent to 27.5 percent. Similarly, the percentage of low-income, uninsured adults with serious mental illness ranged from 1.3 percent to 13 percent, while the percentage with a substance use condition ranged from 5.9 percent to 23.5 percent. 
See figure 3 for the states with the highest and lowest estimated percentages of low-income, uninsured adults with a serious mental illness or substance use condition. See appendix I for state-by-state estimates. Of the 3 million low-income, uninsured adults estimated to have a behavioral health condition, nearly half—approximately 1.4 million people, or about 49 percent—lived in the 22 states that had not expanded Medicaid as of February 2015, compared with the approximately 1.5 million people in the remaining 29 states that had expanded Medicaid. The estimated prevalence of behavioral health conditions overall among low-income, uninsured adults was about 17 percent, on average, in both expansion and non-expansion states. State BHAs in the non-expansion states we examined offered a variety of behavioral health treatments for low-income, uninsured adults. These states identified priority populations to focus care on adults with the most serious conditions and used waiting lists for those with more modest behavioral health needs. The non-expansion states we examined—Missouri, Montana, Texas, and Wisconsin—offered a range of behavioral health treatments—inpatient and outpatient services and prescription drugs—for low-income, uninsured adults. These states used community mental health centers, state institutions, and contracts with providers to deliver treatments, and used a variety of sources, such as state general funds, federal block grants, and Medicaid, to fund them. For mental health and substance use conditions, outpatient services that these states offered included evaluation and assessment, visits with medical providers, and individual, family, and group counseling. Treatments also included emergency care, and in most of these states, partial hospitalizations and inpatient psychiatric care for mental health conditions. For substance use conditions, these states also offered detoxification and residential treatment. These states generally made prescription drugs available to uninsured adults as part of the treatment for behavioral health conditions. For example, Missouri, Texas, and Wisconsin included medication-assisted treatment for substance use conditions, and all four of the selected non-expansion states offered prescription drugs for mental health conditions. In addition to treatment, the non-expansion states also offered some supportive services, such as peer support or housing services, for uninsured adults. For two of the states we examined—Wisconsin and Texas—the availability of specific services for behavioral health may vary throughout the state. In particular, the responsibility for administering and providing treatment was divided between the state BHA and local entities, which receive both state and local funding to provide behavioral health treatment. A Wisconsin BHA official told us that counties receive funding for behavioral health, which they can use to fund services of their choosing. The Wisconsin BHA identified a core list of 30 services for behavioral health that it promotes and encourages counties to provide, but the official noted that it may be difficult for a single county to provide all of the services on the list. For example, the Wisconsin BHA reported that about a quarter of counties provided medication-assisted treatment for individuals with substance use conditions in 2013. As another example, Texas offers opportunities for local mental health authorities to compete for funding for specific types of services, such as housing. 
Local entities are counties for Wisconsin and local mental health authorities in Texas. In contrast, the state BHA is solely responsible for administering behavioral health treatment in Missouri and Montana. In addition, Wisconsin obtained a Medicaid waiver effective January 2014 that made certain childless adults up to 100 percent of the FPL eligible for Medicaid, which gave them access to Medicaid-covered services and prescription drugs, including behavioral health treatments. Officials from the non-expansion states we examined noted initiatives relevant to low-income, uninsured adults, such as improving crisis response and coordinating care for individuals involved with law enforcement. Texas officials noted that 24 of the 33 local mental health authorities have a facility-based crisis option to treat individuals experiencing a crisis and that they would like to provide the remaining local mental health authorities with similar facilities, which are intended to avoid inpatient care. In Wisconsin, behavioral health treatment includes mobile crisis services to respond to individuals in the community experiencing a crisis. A Wisconsin official told us that there were legislative efforts underway to expand these services, particularly in rural areas. Missouri has hired community mental health liaisons to facilitate access to behavioral health services for individuals who are in frequent contact with law enforcement. The selected non-expansion states established priority populations for providing behavioral health treatment to those with the most severe behavioral health needs. For the states we examined, priority populations for mental health treatment included individuals with serious mental illness and those presenting in crisis. Similarly, all the non-expansion states we examined identified priority populations for receiving treatment for substance use conditions. Specifically, pregnant women and individuals abusing drugs intravenously were among the priority groups that the states identified to receive treatment. As part of setting priorities for those with the most serious behavioral health needs, the non-expansion states included specific eligibility requirements based on diagnosis or impairment, in addition to financial status, for behavioral health treatment for the uninsured. In Montana, individuals aged 18 to 64 who are diagnosed with a severe, disabling mental illness and have incomes up to 150 percent of the FPL may qualify for the state-funded Mental Health Services Plan. Montana officials told us that their Mental Health Services Plan does not provide treatment to individuals with more moderate behavioral health needs, but that these individuals may get some treatment through community-based “drop-in” centers. In Texas, local mental health authorities are required to provide services to adults with diagnoses of schizophrenia, bipolar disorder, or clinically severe depression, and may, to the extent feasible, provide services to adults experiencing significant functional impairment due to other diagnoses. Individuals who are not members of the identified priority groups are generally not eligible to receive treatment. Three of the states we examined maintained waiting lists for individuals with more modest needs for behavioral health treatment. Texas officials said that they triage individuals eligible for behavioral health treatment, and those with less urgent needs may have to wait. 
In some cases, individuals may receive a lower level of care than recommended while waiting for treatment due to resource limitations. For example, an individual might receive medication-related services and crisis services as needed, but not recommended rehabilitation services. Texas officials told us that there were over 5,000 individuals waiting for behavioral health treatment as of February 2013, although they were able to move most individuals off waiting lists when they received additional state funding for fiscal years 2014 and 2015. They described this additional funding as “historic,” and they reduced the number of individuals waiting to fewer than 300 as of May 2014. In addition to reducing the waiting list, Texas moved 1,435 adults from lower levels of care to more appropriate levels in 2014. A Wisconsin official told us that if county agencies run out of funding, they are permitted to establish waiting lists or may only serve clients with Medicaid coverage. The official said there were 1,656 individuals waiting for substance use treatment and 242 individuals waiting for a specific mental health service in 2013, prior to Wisconsin extending Medicaid coverage to certain low-income adults through a Medicaid waiver. The official said that county agencies do not have to provide all services if they run out of funding, but must always provide emergency care. Missouri officials said there were 3,723 individuals on the waiting list for substance use treatment as of January 2015. Missouri state BHA officials noted that Missouri does not maintain a waiting list for mental health services. Selected states generally managed behavioral health benefits for newly eligible Medicaid enrollees separately from physical health benefits through carve-outs or separate contracts. Health plans for these enrollees were generally aligned with Medicaid state plans, resulting in comparable behavioral health benefits for newly eligible and existing Medicaid enrollees. According to state officials, expanding Medicaid has increased the availability of behavioral health treatment, although some access concerns continue. The expansion states we examined generally managed behavioral health benefits separately from other benefits through carve-outs or separate contracts. Four of the six states included in our study—Connecticut, Maryland, Michigan, and West Virginia—explicitly carved out or contracted for the administration of behavioral health services or prescription drugs separately from other services and drugs. For example, in Maryland, specialty mental health services have been carved out of its contracts with managed care organizations (MCO) since 1997 and are paid for on an FFS basis. Michigan carved behavioral health services out of its MCO contracts and moved them to a limited benefit plan in 1998. Connecticut, which has an FFS delivery system for newly eligible enrollees, contracted with a behavioral health benefits manager to administer behavioral health services. The other two states contracted with MCOs to provide both physical and behavioral health coverage, but several of these MCOs chose to subcontract with behavioral health benefits managers. See table 2 for information on the expansion states’ coverage designs for behavioral health services and prescription drugs. State officials cited various reasons for separately managing behavioral health benefits, including concerns about access, ensuring appropriate expertise, and state law. 
Maryland officials told us they chose to carve out mental health services and pay for them on an FFS basis through a behavioral health benefits manager due to concerns about beneficiary access under managed care, particularly for more intensive services generally not covered by commercial insurance plans. Maryland also separately carved out mental health prescription drugs on an FFS basis, which officials said was so that the state could align policies for these drugs with the behavioral health benefits manager administering the mental health services carve-out. In Kentucky, three of the five Medicaid MCOs subcontracted behavioral health benefits to a behavioral health benefits manager. Kentucky officials told us that one of the MCOs decided to subcontract these benefits due to a lack of expertise in managing behavioral health prescription drugs. Michigan and Connecticut officials told us that state laws prohibit their Medicaid programs from using certain utilization management techniques for some types of behavioral health prescription drugs. Michigan officials told us that given the lack of utilization management tools available, they decided to pay for behavioral health prescription drugs on an FFS basis rather than to include these drugs in the state’s limited benefit plan contracts. Providers have raised concerns about managing behavioral health benefits separately from medical benefits, and some states reported taking steps to ensure that care is coordinated. Behavioral health physician groups we spoke with told us that paying for physical and behavioral health care separately makes it difficult to assess the total cost of care for individuals with behavioral health conditions, and does not provide adequate incentives to make investments in one type of care that may reduce costs for another type of care. For example, provider groups said that a lack of investment in substance use services could lead to additional costs for emergency medical care. In addition, one physician group raised concerns about managing behavioral health services separately from prescription drugs because of the potential for conflicting utilization management policies to create barriers to care. For example, a pharmacy benefits manager may require outpatient counseling as a condition for receiving medication-assisted treatment for substance use, but such counseling may not be covered by the managed care company that authorizes behavioral health services. The four states we spoke with that explicitly manage behavioral health care separately—Connecticut, Michigan, Maryland, and West Virginia—noted that they were engaged in care coordination efforts. Connecticut officials said that although they have multiple contracts for benefits administration, all claims are processed through a single vendor and the state uses these data to help identify individuals in need of care management. Michigan officials said that the state has implemented claims sharing between the MCOs managing physical health care and the limited benefit plans that manage behavioral health benefits. Michigan is currently working on a demonstration program with CMS that would allow for real-time sharing of clinical information for individuals dually eligible for Medicare and Medicaid. Maryland included financial incentives related to physical health, such as the number of patients who have an annual primary care visit, in the contract with its behavioral health benefits manager. 
West Virginia officials said that they were working on creating a comprehensive managed care plan for newly eligible Medicaid enrollees that would offer both physical and behavioral benefits, including prescription drugs, under the same plan in order to better coordinate care. In addition, Michigan, Maryland, and West Virginia have established Medicaid health homes to coordinate care for individuals with chronic conditions, including behavioral health conditions. As of January 2015, Connecticut was in the process of developing Medicaid health homes for individuals with behavioral health conditions.

Five of the six expansion states included in our study chose to align their alternative benefit plans with their Medicaid state plans—providing at least the same benefits for newly eligible enrollees as existing enrollees received under the state plan—and some states made alignment-related coverage changes. Connecticut, Kentucky, Maryland, Michigan, and Nevada aligned their alternative benefit plans with their Medicaid state plans, which required these states to add to their alternative benefit plans any state plan benefits that were not already included. For example, Michigan officials said they added additional recovery-oriented substance use services, such as peer support services, to the alternative benefit plan to match existing state plan benefits. Although not required, states may also choose to add benefits to their Medicaid state plans to match their alternative benefit plans. As part of the alignment process, Kentucky chose to extend substance use treatment—previously limited to children under 21 and pregnant and postpartum women—to all Medicaid enrollees under its state plan to match the substance use coverage in its alternative benefit plan. West Virginia did not align its alternative benefit plan with its Medicaid state plan, but there were no differences in coverage for behavioral health services and associated prescription drugs.

Officials we interviewed from the six expansion states generally reported that Medicaid expansion had resulted in greater availability of behavioral health treatment, and changes were greater in states without previous coverage options for low-income adults. Kentucky, Nevada, and West Virginia did not have any coverage available for low-income childless adults prior to expansion and primarily relied on their states' BHAs to provide behavioral health treatment for the uninsured. Kentucky officials reported a substantial increase in the availability of behavioral health treatment for individuals when they enrolled in Medicaid, as individuals were no longer limited to what state-funded community mental health centers could provide, and could access additional services, such as peer support services. Nevada officials stated that while the state BHA and the state's Medicaid program provide the same array of behavioral health treatments, some uninsured individuals experienced long delays in receiving care prior to enrolling in Medicaid coverage under the expansion. West Virginia officials cited the increased availability of prescription drugs. West Virginia's BHA did not pay for prescription drugs for uninsured individuals except in limited circumstances, whereas newly eligible Medicaid enrollees gained access to the full array of covered drugs under the state's Medicaid program.
In contrast, Connecticut, Maryland, and Michigan all had limited coverage available for certain low-income adults prior to expanding Medicaid that paid for some behavioral health services and prescription drugs. For example, Maryland's Primary Adult Care program paid for outpatient mental health and substance use services and prescription drugs for adults up to 116 percent of the FPL. Officials from these three states reported that while the availability of treatment increased when individuals enrolled in Medicaid, the changes were small; for example, officials from two states reported that Medicaid beneficiaries had a greater choice of providers. Individuals not enrolled in these coverage programs experienced larger changes; for example, Michigan officials reported that enrollment in Medicaid had resulted in improved access to substance use services, including access to case management, which officials said could help individuals live more successfully in the community.

Officials from the expansion states in our study did report some access concerns for new Medicaid enrollees due to behavioral health professional shortages, which they attempted to address in a variety of ways. Officials from all six states cited behavioral health workforce shortages as a challenge to providing behavioral health treatment for low-income adults in their states. The state officials specifically highlighted shortages of Medicaid-participating psychiatrists and psychiatric drug prescribers. Nevada officials reported conducting a secret shopper study of psychiatrists in the state's Medicaid program in 2014 that found only 22 percent of Medicaid-enrolled psychiatrists were accepting new Medicaid patients. Maryland and Connecticut officials reported difficulties providing Medicaid enrollees with access to certain prescription drugs used for medication-assisted treatment for substance use conditions due to a lack of physicians willing to prescribe these drugs for Medicaid enrollees. States reported taking several steps to address workforce shortages, such as providing reimbursement for telehealth services, expanding the types of providers who can receive reimbursement for providing services in Medicaid, and using peers and other non-licensed providers to deliver some services under the supervision of licensed providers. Michigan chose to address behavioral health needs of its new Medicaid enrollees by leveraging its primary care workforce. The state used a health assessment tool as part of the enrollment process for its alternative benefit plan that included questions about potential behavioral health conditions. Health assessment information was conveyed to each enrollee's primary care provider, who could then address any behavioral health needs or refer for specialty care if needed.

State officials reported additional concerns regarding access to behavioral health treatment due to expansion-related budget reductions for state BHAs, which fund treatment for uninsured individuals, as well as non-Medicaid covered treatments for Medicaid enrollees. Officials from four of the six expansion states we spoke with—Connecticut, Kentucky, Michigan, and Nevada—reported that their state's BHA budget had been reduced based on the expectation that uninsured individuals would enroll in Medicaid.
For example, Michigan officials reported that the state reduced its state general fund contribution for its BHA by about 10 percent ($116 million) from fiscal year 2013 to fiscal year 2015, and Nevada reported a $33 million reduction to its BHA budget over fiscal years 2014 and 2015 related to the expansion. Some state officials raised concerns about having enough state BHA funding for individuals who would remain uninsured or underinsured following expansion, including individuals who are eligible but do not enroll or re-enroll in Medicaid, immigrants, and certain individuals under 65 who are enrolled in Medicare because of a disability. Officials from two states also expressed concerns about the adequacy of funding for wraparound services—services that are not covered by their states’ Medicaid programs, such as supportive housing—for Medicaid enrollees. Officials from the four states that reported BHA budget reductions noted that there were subsequent adjustments to their budgets to lessen the impact of the reductions based on these concerns. For example, Michigan’s BHA received an additional $25 million for fiscal year 2015 to address behavioral health needs in certain populations that remain ineligible for Medicaid. (See appendix II for more information on expansion-related changes in state BHA budgets.) Despite concerns about budget reductions, officials from two states noted that when additional Medicaid funds from the expansion were considered as part of the behavioral health budget, much more funding was available overall. Other continuing access problems mentioned by state officials related to inpatient behavioral health treatment. Nevada officials said that lack of psychiatric inpatient capacity has led to patients who were considered a risk to themselves or others being kept in emergency rooms for up to several days before they could secure a bed in a psychiatric hospital. Officials said that an average of 90 to 110 patients per day, predominately Medicaid enrollees, were waiting in emergency rooms. Nevada has made efforts to address the problem, for example, by sending teams of psychiatrists to emergency rooms to assess psychiatric patients to determine whether they could be discharged and treated on an outpatient basis. However, officials noted that discharging such patients carries risks and has led to poor outcomes in the past. Kentucky officials said that they were working to expand capacity for residential treatment programs for substance use. Officials said that given Medicaid’s exclusion of payment for treatment for adults at “institutions for mental disease” with 16 or more beds, they were encouraging providers to design any new residential substance use programs to be under that limit. However, they noted that doing so can prevent providers from taking advantage of economies of scale and may make it more difficult to operate some residential treatment programs shown to be effective for substance use conditions. Officials said that the state was working to develop alternatives to inpatient care for Medicaid enrollees, such as transitional housing combined with an intensive outpatient program. We provided a draft of this report to the Department of Health and Human Services for review. HHS provided technical comments, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

[Appendix I table: estimated percentages (with standard errors) and numbers of low-income, uninsured adults with behavioral health conditions, by state and by Medicaid expansion status as of February 2015. Notes to the table: percentages estimated with low precision are not reported; totals include data for states that are not reported in the table; for the purposes of this report, the District of Columbia is referred to as a state; numbers for expansion and non-expansion states may not sum to the totals because the percentages used to calculate them were rounded.]
Connecticut officials reported that the budget for its state behavioral health agency (BHA) was reduced in fiscal years 2014 and 2015. However, amid concerns about the effects on providers, the BHA absorbed some of these reductions. In fiscal year 2014, the BHA's budget was reduced by $15.2 million, but the agency absorbed this reduction rather than decreasing the amount of grant funding for providers for the treatment of uninsured and underinsured individuals. In fiscal year 2015, there was a $25.5 million reduction in the BHA's budget. The BHA used a one-time $10 million appropriation from the Connecticut legislature plus other sources to limit the reduction in grant funding for providers to $5.4 million.

Kentucky officials reported that the BHA's budget was reduced by $9 million in fiscal year 2014. However, one-time funds allowed the BHA to avoid reducing funding for its contracts with community mental health centers. In fiscal year 2015, there was a $21 million decrease in the BHA's budget, which was taken from contracts with community mental health centers.

Maryland officials reported that the state has not reduced state general fund support for its BHA due to the expansion of Medicaid.

Michigan officials reported that the state reduced the budget for its state BHA due to the Medicaid expansion, but then added some funds based on concerns about certain populations that remain ineligible for Medicaid. Michigan officials reported that state general revenue contributions to its BHA were reduced by $116 million from fiscal year 2013 to fiscal year 2015 (from $1.153 billion in fiscal year 2013 to $1.037 billion for fiscal year 2015). Michigan officials reported that their legislature had appropriated an additional $25 million for fiscal year 2015 to address the needs of individuals ineligible for Medicaid, such as individuals younger than 64 enrolled in Medicare based on a disability and commercially insured children.

Nevada officials reported that the state reduced the budget for its BHA, but used one-time funds to ease the transition from state funding to Medicaid reimbursement for substance use providers in fiscal year 2014. Nevada officials reported a reduction of $33 million in the BHA's budget in fiscal years 2014 and 2015. Nevada officials said one-time funds totaling about $690,000 were used in fiscal year 2014 for substance use service providers to maintain services during the transition from state funding to Medicaid reimbursement.

West Virginia officials reported that its charity care fund, which reimburses its network of comprehensive community behavioral health centers for the care of the uninsured, was funded at about $15.4 million per year in fiscal years 2013, 2014, and 2015. Officials said that the governor's fiscal year 2016 budget had recommended a $3 million reduction in the charity care fund due to Medicaid expansion.

In addition to the contact named above, William Black, Assistant Director; Manuel Buentello; Hannah Locke; Drew Long; Hannah Marston Minter; and Emily Wilson made key contributions to this report.
Research has shown that low-income individuals disproportionately experience behavioral health conditions and may have difficulty accessing care. Expansions of Medicaid under the Patient Protection and Affordable Care Act (PPACA) raise questions about states' capacity to manage the increased demand for treatment. Additional questions arise about treatment options for low-income adults in non-expansion states.

GAO was asked to provide information about access to behavioral health treatment for low-income, uninsured, and Medicaid-enrolled adults. This report examines (1) how many low-income, uninsured adults may have a behavioral health condition; (2) options for low-income, uninsured adults to receive behavioral health treatment in selected non-expansion states; and (3) how selected Medicaid expansion states provide behavioral health coverage for newly eligible enrollees, and how enrollment in coverage affects treatment availability.

GAO obtained estimates of low-income adults who may have a behavioral health condition from the Substance Abuse and Mental Health Services Administration. GAO also selected four non-expansion and six expansion states based on, among other criteria, geographic region and adult Medicaid enrollment. GAO reviewed documents from all selected states, and interviewed state Medicaid and behavioral health agency (BHA) officials to understand how uninsured and Medicaid-enrolled adults receive behavioral health treatment. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.

Nationwide, estimates using 2008-2013 data indicated that approximately 17 percent of low-income, uninsured adults (3 million) had a behavioral health condition, defined as a serious mental illness, a substance use condition, or both. Underlying these national estimates is considerable variation at the state level. The estimated number of low-income, uninsured adults with behavioral health conditions was divided evenly between states that did and did not subsequently expand Medicaid under PPACA.

BHAs in four selected non-expansion states offered various treatment options for low-income, uninsured adults, focusing care primarily on those with the most serious behavioral health needs. To do so, BHAs in all four selected states established priority populations of those with the most serious behavioral health needs. Also, BHAs in three of the four states maintained waiting lists for adults with less serious behavioral health needs.

Six selected states that expanded Medicaid generally managed behavioral health and physical health benefits separately for newly eligible enrollees, and state officials reported increased availability of behavioral health treatment, although some access concerns continue. Four of the six selected states explicitly chose separate contractual arrangements for behavioral health and physical health benefits. Officials from all six selected states said that enrollment in Medicaid increased the availability of behavioral health treatment for newly eligible enrollees. Officials also reported some ongoing access concerns, such as workforce shortages.
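The headline estimate above implies a base population that can be checked with simple arithmetic. The short sketch below is illustrative only; the implied base of roughly 17 to 18 million low-income, uninsured adults is an inference from the two figures reported in this summary, not a number stated separately here.

```python
# Back-of-the-envelope check of the headline estimate: about 17 percent of
# low-income, uninsured adults, or roughly 3 million people, had a behavioral
# health condition. The implied base population is derived, not reported.

share_with_condition = 0.17          # approximately 17 percent
adults_with_condition = 3_000_000    # approximately 3 million adults

implied_base = adults_with_condition / share_with_condition
print(f"Implied low-income, uninsured adult population: about {implied_base:,.0f}")
# prints roughly 17,647,059, on the order of 17 to 18 million adults
```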
Internal control is not one event, but a series of activities that occur throughout an entity’s operations and on an ongoing basis. Internal control should be an integral part of each system that management uses to regulate and guide its operations rather than as a separate system within an agency. In this sense, internal control is management control that is built into the entity as a part of its infrastructure to help managers run the entity and achieve their goals on an ongoing basis. Section 3512 (c), (d) of Title 31, U.S. Code, commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), requires agencies to establish and maintain effective internal control. The agency head must annually evaluate and report on the control and financial systems that protect the integrity of its federal programs. The requirements of FMFIA serve as an umbrella under which other reviews, evaluations, and audits should be coordinated and considered to support management’s assertion about the effectiveness of internal control over operations, financial reporting, and compliance with laws and regulations. Office of Management and Budget (OMB) Circular No. A-123, Management’s Responsibility for Internal Control, provides the implementing guidance for FMFIA, and prescribes the specific requirements for assessing and reporting on internal controls consistent with the Standards for Internal Control in the Federal Government (internal control standards) issued by the Comptroller General of the United States. The circular defines management’s responsibilities related to internal control and the process for assessing internal control effectiveness, and provides specific requirements for conducting management’s assessment of the effectiveness of internal control over financial reporting. Specifically, the circular requires management to annually provide assurances on internal control in its performance and accountability report, and, for each of the 24 Chief Financial Officers (CFO) Act agencies, to include a separate assurance on internal control over financial reporting, along with a report on identified material weaknesses and corrective actions. The circular also emphasizes the need for integrated and coordinated internal control assessments that synchronize all internal control–related activities. FMFIA requires GAO to issue standards for internal control in the federal government. The internal control standards provide the overall framework for establishing and maintaining effective internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. As summarized in the internal control standards, internal control in the government is defined by the following five elements, which also provide the basis against which internal controls are to be evaluated: Control environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Risk assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Control activities: Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. 
Information and communication: Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. A key objective in our annual audits of IRS’s financial statements is to obtain reasonable assurance that IRS maintained effective internal control with respect to financial reporting. While we use all five elements of internal control as a basis for evaluating the effectiveness of IRS’s internal controls, our ongoing evaluations and tests have focused heavily on control activities, where we have identified numerous internal control weaknesses and have provided recommendations for corrective action. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. In other words, they are the activities conducted in the everyday course of business that are intended to accomplish a control objective, such as ensuring IRS employees successfully complete background checks prior to being granted access to taxpayer information and receipts. As such, control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achievement of effective results. To accomplish our objectives, we evaluated the effectiveness of corrective actions IRS implemented during fiscal year 2009 in response to open recommendations as part of our fiscal years 2009 and 2008 financial audits. To determine the current status of the recommendations, we (1) obtained IRS’s reported status of each recommendation and corrective action taken or planned as of April 2010, (2) compared IRS’s reported status to our fiscal year 2009 audit findings to identify any differences between IRS’s and our conclusions regarding the status of each recommendation, and (3) performed additional follow-up work to assess IRS’s actions taken to address the open recommendations. For our recommendations to IRS regarding information security, this report includes only summary data on the number of those recommendations and their general nature. We have reported the objectives and results of our information security work separately to IRS because of the sensitive nature of many of the issues identified for which we have made recommendations for corrective action. In order to determine how IRS’s open recommendations, including the latest ones in our June 2010 management report, fit within the agency’s management and internal control structure, we compared the open recommendations and the issues that gave rise to them to the (1) control activities listed in the internal control standards, (2) list of major factors and examples outlined in our Internal Control Management and Evaluation Tool, and (3) criteria and objectives for federal financial management as discussed in the CFO Act of 1990 and the Federal Accounting Standards Advisory Board’s (FASAB) Statement of Federal Financial Accounting Concepts No. 1, Objectives of Federal Financial Reporting. We also considered whether IRS had addressed, in whole or in part, the underlying control issues that gave rise to the recommendations; and other legal requirements and implementing guidance, such as OMB Circular No. A-123 and FMFIA. 
Our work was performed from December 2009 through May 2010 in accordance with generally accepted government auditing standards. IRS continues to make progress in resolving its internal control weaknesses and addressing outstanding recommendations, but it still faces significant financial management challenges. Since we first began auditing IRS’s financial statements in fiscal year 1992, IRS has taken a significant number of actions that enabled us to eliminate several material weaknesses and significant deficiencies and to close over 250 of our previously reported financial management–related recommendations. This includes 18 recommendations we are closing with this report based on actions IRS took through April 2010. Nevertheless, IRS continues to face significant challenges in improving the effectiveness of its financial and operational management. Specifically, IRS continues to face management challenges in (1) resolving its two remaining material weaknesses in internal control, (2) developing performance measures and managing for outcomes, and (3) addressing its remaining internal control issues, particularly those dealing with safeguarding of taxpayer receipts and information. Further, as in previous years’ audits, our fiscal year 2009 audit continued to identify additional internal control issues, resulting in 41 new recommendations for corrective action we discussed in detail in our June 2010 management report to IRS. In addition, as noted earlier, we also identified several issues related to information security during our fiscal year 2009 audit that we reported separately because of the sensitive nature of many of those issues. As we reported in our audit of IRS’s fiscal year 2009 financial statements, IRS’s efforts to address its internal control weaknesses resulted in our closure of a material weakness in internal control over financial reporting and a significant deficiency in internal control over tax revenue and refunds. However, as we also reported in that audit, IRS continues to face significant challenges in resolving its two remaining material weaknesses in internal control concerning (1) unpaid tax assessments and (2) information security. IRS’s continuing challenge in addressing its material weakness in internal control over unpaid tax assessments results from its (1) inability to use its core general ledger system for tax administration–related transactions to support its reported balances for taxes receivable and other unpaid assessments, (2) lack of a subsidiary ledger for unpaid tax assessments that would allow it to produce reliable, useful, and timely information with which to manage and report externally on these key transactions, and (3) errors and delays in recording taxpayer information, payments, and other activities. These control deficiencies impede IRS’s ability to properly manage and routinely report certain information on unpaid tax assessments and lead to increased taxpayer burden. IRS’s continuing challenge in addressing its material weakness in internal control over information security is primarily due to IRS not having fully implemented its information security program. 
As we reported in our audit of IRS's fiscal year 2009 financial statements, IRS has not (1) restricted users' ability to bypass application controls, (2) removed separated employees' system access in a timely manner, (3) followed required procedures to timely review employee access to sensitive areas at data centers, (4) restricted system access to only those who needed it, (5) instituted adequate separation of duties for its procurement system, and (6) developed adequate encryption controls over user login. IRS's deficiencies in internal control over information security result in IRS's inability to rely on the controls embedded in its automated financial management systems to provide reasonable assurance that its (1) financial statements are fairly stated in accordance with U.S. generally accepted accounting principles, (2) financial information that management relies on to support day-to-day decision making is current, complete, and accurate, and (3) proprietary information processed by these automated systems is appropriately safeguarded. These deficiencies also increase the risk that unauthorized individuals could access, alter, or abuse proprietary IRS programs and electronic data and taxpayer information without detection. We have made numerous recommendations to IRS over the years—including new recommendations resulting from our fiscal year 2009 financial audit—to address the issues constituting these two material internal control weaknesses. Successfully implementing these recommendations would assist IRS in fully resolving these weaknesses. To its credit, IRS continues to work to address the issues underlying these two material weaknesses.

As we reported in our audit of IRS's fiscal year 2009 financial statements, IRS continues to face significant challenges in developing and institutionalizing the use of financial management information to assist it in making operational decisions and in measuring the effectiveness of its programs. IRS's management has not developed the data or outcome-oriented performance measures that would enhance its ability to manage for outcomes. For example, it has not integrated the use of cost-based (and when appropriate, revenue-based) performance metrics into its routine management and decision-making processes or externally reported performance metrics. Although IRS has developed projected return on investment estimates for new enforcement (tax collection) initiatives in its annual budget submissions, it has not developed similar outcome-oriented performance metrics to determine whether funded initiatives achieve their estimated goals. IRS has also not developed outcome-oriented performance metrics for its existing enforcement programs. These limitations inhibit IRS's ability to more fully assess and monitor the relative merits of its existing programs, to evaluate new initiatives, or to consider alternatives and adjust its strategies as needed.

Outcome-oriented performance metrics based on specific enforcement programs' costs and revenues should improve IRS's ability to (1) establish measurable outcome goals, (2) evaluate the relative merits of various program options, and (3) highlight opportunities for optimizing the allocation of resources. They can also help IRS more credibly demonstrate to Congress and the public that it is spending its appropriations wisely. IRS's existing metrics focus on process-oriented workload measures of program outputs rather than on measuring program outcomes.
For example, for its enforcement programs, IRS focuses on measuring discrete activities within its overall tax collection efforts, such as the percentage of various types of tax returns examined, criminal investigations completed, and the number of tax returns examined and closed. While such output measures can be useful elements in assessing performance, they are not designed to measure the contribution each of these activities makes to the collection of unpaid taxes, nor do they compare the cost of collection activities to the tax revenue generated. IRS's enforcement metrics do not include revenue collected—a measure of outcome—compared to the cost of collection that could show the net monetary benefits of the enforcement programs. In addition, IRS's publicly available performance metrics do not measure the cost of IRS's programs either in the aggregate or per service or activity performed.

As we report in the "Status per IRS" section of appendix I in this report, IRS has reported that it considers our recommendation to develop outcome-oriented performance measures and related performance goals for IRS's enforcement programs and activities to be closed. We do not agree. Part of IRS's justification for closing the recommendation is that IRS uses cost-benefit return on investment analysis to evaluate future scenarios and to support funding requests for new initiatives in its annual budget submissions. Such prospective return on investment information is useful for budgetary decision making, but our recommendation is for IRS to develop outcome data on the actual results of its programs and activities. We have also previously recommended that IRS (1) extend the use of return on investment in future budget proposals to include major enforcement programs and (2) develop return on investment data for its enforcement programs using actual revenue and full cost data and compare actual results to the projected return on investment data included in its budget request. Our recommendations regarding development of outcome-oriented performance metrics remain open because, as noted above, IRS does not develop such data for either funded initiatives or for ongoing enforcement programs and activities and it has not deployed outcome-oriented performance measures.

IRS also reported that return on investment information is but one tool that can be utilized to improve resource-allocation decision making, and it is not prudent to rely exclusively on return on investment as the sole determinant of resource allocation. As we have reported previously, we acknowledge that IRS must consider other factors besides maximizing revenue collection and least-cost operations. The fairness of IRS's implementation of the tax code and treatment of all taxpayers are important, and we are cognizant of the many factors, such as coverage, that are important considerations when making resource-allocation decisions. These factors, and the decisions IRS makes about how to respond to them, have a significant effect on taxpayers, as well as on tax collections. However, using full cost and collection outcome-oriented performance metrics is also important if IRS is to make optimum use of its available resources and to credibly demonstrate to Congress and the public that it is doing so. For several years, IRS has been developing full cost data on its programs and activities in response to a recommendation we made in 1999.
However, as we have reported in the past, IRS’s efforts have been slowed because IRS cannot produce full cost information down to the program and activity levels directly from its cost accounting system, the Integrated Financial System (IFS). IRS has partially overcome this difficulty by developing the ability to manually combine cost data from IFS with personnel time-charge data from IRS’s various workload management systems and revenue data for enforcement programs to develop full cost (and revenue) information for selected programs. IRS’s lack of outcome-oriented performance metrics is inconsistent with federal financial management concepts as embodied in FASAB’s Statement of Federal Financial Accounting Concepts No. 1, Objectives of Federal Financial Reporting. In its discussion of financial reporting concepts, FASAB notes that federal financial data should provide accountability and decision-useful information on the costs of programs and the outputs and outcomes achieved, and it should provide data for evaluating service efforts, costs, and accomplishments. The absence of outcome metrics is also inconsistent with the objectives of the CFO Act of 1990. A key objective of the act was for agencies to routinely develop and use appropriate financial management information to evaluate program effectiveness, make fully informed operational decisions, and ensure accountability. While obtaining a clean audit opinion on its financial statements is important in itself, it is not the end goal reflected in the act. The end goal is modern financial management systems that provide reliable, timely, and useful financial information to support day-to-day decision making and oversight. Such systems and practices should also provide for the systematic measurement of both outputs and outcomes. Developing the data and performance metrics necessary for a more outcome-oriented approach to managing operations requires active and sustained senior management leadership. We acknowledge that without the benefit of integrated financial management systems, IRS faces significant challenges in developing outcome-oriented performance metrics, including the data needed for such metrics. However, undertaking such an effort agencywide will enhance IRS’s ability to effectively measure and compare the benefits of its programs to make better informed resource-allocation decisions and to better support its budget requests. We have made several recommendations to IRS over the years to address its financial management challenges in developing full cost data for its programs and activities and for outcome-oriented performance measures. Successfully addressing the remaining open recommendations would enhance IRS’s ability to effectively manage for outcomes. As discussed earlier, IRS has taken significant actions over the years to resolve internal control weaknesses and this has enabled us to close over 250 internal control–related recommendations. The closure of such a high number of recommendations indicates that IRS has a strong commitment to improving its internal control. However, IRS also continues to face a challenge in addressing numerous other unresolved internal control issues in several aspects of its operations that, while neither individually nor collectively representing a material weakness, nonetheless merit management attention to ensure they are fully and effectively addressed. 
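Before turning to the remaining open recommendations, the following minimal sketch illustrates the kind of outcome-oriented metric discussed above: combining full cost data with collection revenue to compute return on investment and net benefit by enforcement program. The program names and dollar figures are entirely hypothetical and are not drawn from IRS data; this is a sketch of the concept, not IRS's or GAO's methodology.

```python
# Illustrative computation of outcome-oriented enforcement metrics: return on
# investment (ROI) and net monetary benefit per program, derived by combining
# full cost data with revenue collected. All names and figures are hypothetical.

programs = {
    # program: (full cost in dollars, revenue collected in dollars)
    "Hypothetical examination program": (1_200_000_000, 9_600_000_000),
    "Hypothetical collection program": (800_000_000, 4_000_000_000),
}

for name, (full_cost, revenue) in programs.items():
    roi = revenue / full_cost            # dollars collected per dollar of full cost
    net_benefit = revenue - full_cost    # net monetary benefit of the program
    print(f"{name}: ROI {roi:.1f} to 1, net benefit ${net_benefit:,.0f}")
```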
IRS now has a total of 70 open audit recommendations resulting from internal control issues that we report as “other control issues” in appendix II of this report. While most were identified during our recent financial audits, some were identified in our audits as far back as 1999 and 2001. Over half of those 70 open recommendations address issues related to the physical safeguarding of tax receipts and taxpayer information, a critical aspect of IRS’s responsibilities. IRS processes billions of dollars annually in checks and currency and other valuable assets, and it must physically safeguard and account for them to prevent theft, fraud, and misuse. To do so, IRS has established physical security, accountability, and accounting policies, processes, and procedures to manage its activities involving the transportation and accounting for tax receipts and for handling and storing taxpayer information. Although IRS has made substantial improvements in safeguarding taxpayer receipts and information since our financial audits first began surfacing serious internal control issues in this area, the task of ensuring ongoing control over such critical responsibilities for IRS is a difficult one. Each year, we continue to identify control issues related to IRS’s safeguarding of taxpayer receipts and information. For example, based on our fiscal year 2009 audit, we identified new internal control issues and made 19 additional recommendations that related either directly or indirectly to the physical safeguarding of taxpayer receipts and information. The internal control issues encompassed in our recommendations cover critical physical security functions, such as transporting taxpayer receipts and sensitive taxpayer information among IRS facilities and lockbox banks and maintaining physical security at IRS facilities to prevent loss, theft, or the potential for fraud regarding tax receipts and taxpayer information; conducting inspections and audits of the design and operation of IRS’s physical security processes and controls designed to safeguard tax receipts and taxpayer information; conducting appropriate background investigations and screening of personnel, including contractors, with access to IRS facilities and lockbox bank operations; and ensuring the proper destruction of documents to prevent the inappropriate release of sensitive taxpayer information. Due to the volume of taxpayer receipts and sensitive taxpayer files that IRS is responsible for safeguarding, and the implications for IRS’s mission if they are lost, stolen, or the subject of fraud or misuse, it is critical that IRS successfully resolve the internal control issues we have identified and work toward continually improving its internal controls to prevent new issues from arising. In June 2009, we issued a report on the status of IRS’s efforts to implement corrective actions to address financial management recommendations stemming from our fiscal year 2008 and prior year financial audits and other financial management–related work. In that report, we identified 62 audit recommendations that remained open and thus required corrective action by IRS. A significant number of these recommendations had been open for several years, either because IRS had not taken corrective action or because the actions taken had not yet effectively resolved the issues that gave rise to the recommendations. IRS continued to work to address many of the internal control issues to which these open recommendations relate. 
In the course of performing our fiscal year 2009 financial audit, we identified numerous actions IRS took to address many of its internal control issues. On the basis of IRS's actions, which we were able to substantiate through our audit, we have closed 18 of these prior years' recommendations. However, a total of 44 recommendations from prior years remain open, a significant number of which have been outstanding for several years. IRS considers another 21 of the prior years' recommendations to be effectively addressed and therefore closed. However, we consider them to remain open. For 14 of the 21, in our view, IRS's actions did not fully address the issue that gave rise to the recommendations. For the remaining seven, we have not yet been able to verify the effectiveness of IRS's actions. (See app. I, "Status per GAO," for our assessment of IRS's actions on each recommendation).

During our audit of IRS's fiscal year 2009 financial statements, we identified additional issues that require corrective action. In our June 2010 management report to IRS, we discussed these issues, and made 41 new recommendations to address them. Consequently, a total of 85 financial management–related recommendations need to be addressed—44 from prior years and 41 new ones from our fiscal year 2009 audit. We consider all of the new recommendations to be short-term. We also consider the majority of the recommendations outstanding from prior years to be short-term; however, a few, particularly those concerning the functionality of IRS's automated systems, are complex and will require several more years to fully and effectively address.

In addition to the 85 open recommendations from our financial audits and other financial management–related work, there are 88 additional open recommendations stemming from our assessment of IRS's information security controls over key financial systems, information, and interconnected networks conducted as an integral part of our annual financial audits. The issues that led to our previously reported and our newly identified recommendations related to information security increase the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer data. Collectively, they constitute IRS's material weakness in internal control over information security for its financial and tax processing systems. As discussed earlier in this report, recommendations resulting from the information security issues identified in our annual audits of IRS's financial statements are reported separately because of the sensitive nature of many of these issues.

Appendix I presents a combined listing of (1) the 62 non-information systems security–related recommendations based on our financial statement audits and other financial management–related work that we had not previously reported as closed and the 41 new recommendations based on our fiscal year 2009 financial audit, (2) IRS-reported corrective actions taken or planned as of April 2010, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed, based primarily on the work performed during our fiscal year 2009 financial statement audit. The appendix lists the recommendations by the date on which the recommendation was made and by report number.
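The recommendation counts discussed above can be reconciled with simple arithmetic. The following minimal sketch uses only the counts reported in this section; the combined total of open recommendations is a derived figure shown here for illustration only.

```python
# Tally of open financial management recommendations discussed above.
# Counts are taken from this report; the combined total is derived.

open_prior_years = 62     # recommendations reported open in June 2009
closed_fy2009 = 18        # closed based on actions substantiated in the fiscal year 2009 audit
new_fy2009 = 41           # new recommendations from the fiscal year 2009 audit

remaining_prior = open_prior_years - closed_fy2009       # 44 prior-year recommendations still open
open_financial_mgmt = remaining_prior + new_fy2009       # 85 open financial management recommendations

info_security_open = 88   # separately reported information security recommendations
total_open = open_financial_mgmt + info_security_open    # 173 open recommendations overall (derived)

print(remaining_prior, open_financial_mgmt, total_open)  # 44 85 173
```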
Appendix II presents the open recommendations arranged by related material weakness and compliance issue as described in our opinion report on IRS's financial statements, as well as other control issues we have identified and discussed in our annual management reports to IRS. Linking the open recommendations from our financial audits and other financial management–related work, and the issues that gave rise to them, to internal control activities that are central to IRS's tax administration responsibilities provides insight regarding their significance. Internal control standards consist of five elements—control environment, risk assessment, control activities, information and communication, and monitoring. For the control activities element, the internal control standards explain that internal control activities help ensure that management's directives are carried out. The control activities should be effective and efficient in accomplishing the agency's control objectives. The control activities element defines 11 specific control activities, which we have grouped into three categories, as shown in table 1.

Each of the unresolved recommendations from our financial audits and financial management–related work, and the underlying issues that gave rise to them, can be traced to 1 of the 11 specific control activities as shown in table 1. As table 1 indicates, 19 (22 percent) of the unresolved recommendations relate to IRS's controls over safeguarding of assets and security activities, 39 (46 percent) relate to issues associated with IRS's ability to properly record and document transactions, and 27 (32 percent) relate to issues associated with IRS's management review and oversight. On the following pages, we group the 85 open recommendations under the specific control activity to which the condition that gave rise to them most appropriately fits. We define each control activity as presented in the internal control standards and briefly identify some of the key IRS operations that fall under that control activity. Although not comprehensive, the descriptions are intended to help explain why actions to strengthen these control activities are important for IRS to efficiently and effectively carry out its overall mission. Each control activity description includes a table of the related open recommendations. The tables list the recommendations by the year in which we made them (ID no.). For each recommendation, we also indicate whether it is a short-term or long-term recommendation. We characterized a recommendation as short-term when we believed that IRS had the capability to implement solutions within 2 years of the year in which we first reported them.

Given IRS's mission, the sensitivity of the data it maintains, and its processing of trillions of dollars of tax receipts each year, one of the most important control activities at IRS is the safeguarding of assets. Internal control in this important area should be designed to provide reasonable assurance regarding prevention or prompt detection of unauthorized acquisition, use, or disposition of an agency's assets.
IRS has outstanding recommendations in the following three control activities in the internal control standards that relate to safeguarding of assets (including buildings and equipment as well as tax receipts) and security activities (such as limiting access to only authorized personnel): (1) physical control over vulnerable assets; (2) segregation of duties; and (3) access restrictions to, and accountability for, resources and records. Internal control standard: An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records. Of the trillions of dollars in taxes that IRS collects each year, hundreds of billions is collected in the form of checks and cash accompanied by tax returns and related information. IRS collects taxes both at its own facilities as well as at lockbox banks. IRS acts as custodian for (1) the tax payments it receives until they are deposited in the General Fund of the U.S. Treasury and (2) the tax returns and related information it receives until they are either sent to the Federal Records Center or destroyed. IRS is also charged with controlling many other assets, such as computers and other equipment, but it is IRS’s legal responsibility to safeguard tax returns and the confidential information taxpayers provided on those returns that makes the effectiveness of IRS’s internal controls over physical security essential. While effective physical safeguards over receipts should exist throughout the year, such safeguards are especially important during the peak tax filing season. Each year during the weeks preceding and shortly after April 15, an IRS service center or lockbox bank may receive and process daily over 100,000 pieces of mail containing returns, receipts, or both. The dollar value of receipts each service center and lockbox bank processes increases to hundreds of millions of dollars a day during the April 15 time frame. The following 11 open recommendations in table 2 are designed to improve IRS’s physical controls over vulnerable assets. They include recommendations for IRS to improve controls over (1) physical security at its Taxpayer Assistance Centers (TAC), (2) courier activities, and (3) lockbox banks’ handling of unprocessable items. We consider all of these recommendations to be correctable on a short-term basis. Internal control standard: Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event. As noted in the previous section, IRS employees process hundreds of billions of dollars in tax receipts in the form of cash and checks. Consequently, it is critical that IRS maintain appropriate separation of duties to allow for adequate oversight of staff and protection of these vulnerable resources so that no single individual would be in a position of causing an error or irregularity, or potentially converting the asset to personal use, and then concealing it. 
For example, when an IRS field office receives taxpayer receipts and returns, it is responsible for depositing the cash and checks in a depository institution and forwarding the related taxpayer information received, such as tax returns, to an IRS service center for further processing. In order to adequately safeguard receipts from theft, the person responsible for recording the information from the taxpayer receipts on a voucher should be different from the individual who prepares those receipts for transmittal to the service center for further processing. Also, IRS employees must properly account for the billions of dollars IRS spends each year on its operations. Implementing the following three recommendations in table 3 would help IRS improve its separation of duties, which will in turn strengthen its controls over tax receipts, procurement activities, and financial accounting processes. All are short-term in nature. Internal control standard: Access to resources and records should be limited to authorized individuals, and accountability for their custody and use should be assigned and maintained. Periodic comparison of resources with the recorded accountability should be made to help reduce the risk of errors, fraud, misuse, or unauthorized alteration. Because IRS handles, and is responsible for maintaining accountability over, a large volume of cash and checks, it is imperative that it maintain strong controls to appropriately restrict access to those assets, the records relied on to track those assets, and sensitive taxpayer information. Although IRS has a number of both physical and information systems controls in place, some of the issues we have identified in our financial audits over the years pertain to ensuring that (1) those individuals who have direct access to cash and checks are appropriately vetted, such as through appropriate background investigations, before being granted access to taxpayer receipts and information and (2) IRS maintains effective access security control. The following five short-term recommendations in table 4 were intended to help IRS improve its access restrictions to assets and records. Internal control standard: Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. IRS collects and processes trillions of dollars in taxpayer receipts annually both at its own facilities and at lockbox banks under contract to process taxpayer receipts for the federal government. Therefore, it is important that IRS maintain effective controls to ensure that all documents and records are properly and timely recorded, managed, and maintained both at its facilities and at the lockbox banks. In this regard, it is critical that IRS adequately document and disseminate its procedures to ensure that they are available for IRS employees. IRS must also document its management reviews of controls, such as those regarding refunds and returned checks, credit card purchases, and reviews of TAC operations. 
To ensure future availability of adequate documentation, IRS must ensure that (1) its systems, particularly those now being developed and implemented, have appropriate capability to identify and trace individual transactions and (2) all critical steps in its accounting processes are adequately documented and controlled. Resolving the following 18 recommendations in table 5 would assist IRS in improving its documentation of transactions and related internal control procedures. Seventeen of these recommendations are short-term, and one is long-term. IRS maintains records for tens of millions of taxpayers in addition to maintaining its own financial records. To carry out this responsibility, IRS often has to rely on outdated computer systems or manual work-arounds. Unfortunately, some of IRS's recordkeeping difficulties we have reported on over the years will not be addressed until it can replace its aging systems, an effort that is long-term and, in part, dependent on obtaining future funding. Implementation of the following 20 recommendations in table 6 would strengthen IRS's recordkeeping abilities. Sixteen of these recommendations are short-term, and four are long-term, concerning requirements for new systems for maintaining taxpayer records. Several of the recommendations listed deal with financial reporting processes, such as maintaining subsidiary records, recording budgetary transactions, and tracking program costs. Some of the issues that gave rise to several of our recommendations directly affect taxpayers, such as those involving duplicate assessments, errors in calculating and reporting manual interest, errors in calculating penalties, and collection of trust fund recovery penalty assessments. Three of these recommendations have remained open for over 10 years, reflecting the complex nature of the underlying systems issues that must be resolved to fully address some of these control deficiencies. Internal control standard: Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of ensuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. Authorizations should be clearly communicated to managers and employees. IRS spends approximately $250 million annually to cover the cost of its employees' travel. Failure to ensure that employees obtain appropriate authorizations for their travel leaves the government open to fraud, waste, or abuse. IRS actions to address the following short-term recommendation in table 7 would improve its controls over travel costs. All personnel within IRS have an important role in establishing and maintaining effective internal controls, but IRS's managers have additional review and oversight responsibilities. Management must set the objectives, put control activities in place, and monitor and evaluate controls to ensure that they are followed. Without adequate monitoring by managers, there is a risk that internal control activities may not be carried out effectively and in a timely manner. IRS has outstanding recommendations in the following four control activities related to effective management review and oversight: (1) reviews by management at the functional or activity level, (2) establishment and review of performance measures and indicators, (3) management of human capital, and (4) top-level reviews of actual performance.
Internal control standard: Managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences. IRS employs over 100,000 full-time and seasonal employees. In addition, as discussed earlier, lockbox banks process tens of thousands of individual receipts, totaling hundreds of billions of dollars for IRS. Management oversight of operations is important at any organization, but is imperative at IRS given its mission. Implementing the following 14 short-term recommendations and 1 long-term recommendation in table 8 would improve IRS's management oversight of several areas of its operations, including monitoring of contractor facilities, release of tax liens, issuance of manual refunds, and use of appropriated funds. These recommendations were made because an internal control activity either did not exist or the existing control was not being adequately or consistently applied. Internal control standard: Activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Controls should also be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators. IRS's operations include a vast array of activities encompassing educating taxpayers, processing of taxpayer receipts and data, disbursing hundreds of billions of dollars in refunds to millions of taxpayers, maintaining extensive information on tens of millions of taxpayers, and seeking collection from individuals and businesses that fail to comply with the nation's tax laws. Within its compliance function, IRS has numerous activities, including identifying businesses and individuals that underreport income, collecting from taxpayers who do not pay taxes, and collecting from those receiving refunds to which they are not entitled. Although IRS has over 100,000 employees at its peak, it still faces resource constraints in attempting to fulfill its duties. It is vitally important for IRS to have sound performance measures to assist it in assessing its performance and targeting its resources to maximize the government's return on investment. However, in past audits we have reported that IRS did not capture costs at the program or activity level to assist in developing cost-based performance measures for its various programs and activities. As a result, IRS is unable to measure the costs and benefits of its various collection and enforcement efforts to best target its available resources. The following one short-term and two long-term recommendations in table 9 are designed to assist IRS in (1) evaluating its operations, (2) determining which activities are the most beneficial, and (3) establishing a sound system of oversight. These recommendations are directed at improving IRS's ability to measure, track, and evaluate the costs, benefits, or outcomes of its operations—particularly with regard to identifying its most cost-effective tax collection activities. Internal control standard: Effective management of an organization's workforce—its human capital—is essential to achieving results and an important part of internal control. Management should view human capital as an asset rather than a cost.
Only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities is operational success possible. Management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals. Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. Qualified and continuous supervision should be provided to ensure that internal control objectives are achieved. Performance evaluation and feedback, supplemented by an effective reward system, should be designed to help employees understand the connection between their performance and the organization’s success. As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. IRS’s operations cover a wide range of technical activities requiring specific expertise needed in tax-related matters; financial management; and systems design, development, and maintenance. Because IRS has tens of thousands of employees spread throughout the country, it is imperative that management keep its guidance up-to-date and its staff properly trained. Taking action to implement the following eight short-term recommendations in table 10 would assist IRS in its management of human capital. Internal control standard: Management should track major agency achievements and compare these to the plans, goals, and objectives established under the Government Performance and Results Act. IRS is responsible for developing and operating a system of internal control to ensure that it spends the billions of dollars appropriated to it each year for operations in accordance with the directions dictated by Congress. Implementing the following short-term recommendation in table 11 would improve IRS’s management and oversight of its performance against legal mandates and requirements. Increased budgetary pressures and an increased public awareness of the importance of internal control require IRS to carry out its mission more efficiently and more effectively while protecting taxpayers’ information. Sound financial management and effective internal controls are essential if IRS is to efficiently and effectively achieve its goals. IRS has made substantial progress in improving its financial management and internal control since its first financial audit, as evidenced by unqualified audit opinions on its financial statements for the past 10 years; resolution of several material internal control weaknesses, significant deficiencies, and other control issues; and actions taken resulting in the closure of hundreds of financial management recommendations. This progress has been the result of hard work by many individuals throughout IRS and sustained commitment of IRS leadership. Nonetheless, more needs to be done to fully address the agency’s continuing financial management challenges—resolving material internal control weaknesses; developing outcome-oriented performance metrics that can facilitate managing operations for outcomes; and correcting numerous other internal control issues. 
Effective implementation of the recommendations we have made and continue to make through our financial audits and related work could greatly assist IRS in improving its internal controls and achieving sound financial management. While we recognize that some actions—primarily those related to modernizing automated systems—will take a number of years to resolve, most of the open recommendations can be addressed in the short term. In commenting on a draft of this report, IRS expressed its appreciation for our acknowledgment of the agency's progress in addressing its financial management challenges as evidenced by our closure of 18 open financial management recommendations from prior GAO reports. IRS also commented that it is committed to implementing appropriate improvements to ensure that it maintains sound financial management practices. We will review the effectiveness of further corrective actions IRS has taken or will take to address all open recommendations as part of our audit of IRS's fiscal year 2010 financial statements. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Homeland Security and Governmental Affairs; and Subcommittee on Taxation, IRS Oversight and Long-Term Growth, Senate Committee on Finance. We are also sending copies to the Chairmen and Ranking Members of the House Committee on Appropriations; House Committee on Ways and Means; the Chairman and Vice Chairman of the Joint Committee on Taxation; the Secretary of the Treasury; the Director of OMB; the Chairman of the IRS Oversight Board; and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix presents a list of (1) the 62 recommendations that we had not previously reported as closed, (2) corrective actions the Internal Revenue Service (IRS) reported as taken or planned as of April 2010, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed. It also includes 41 recommendations based on our fiscal year 2009 financial statement audit. The appendix lists the recommendations by the year and recommendation number (ID no.) and also identifies the report in which the recommendation was made. For several years, we have reported material weaknesses, significant deficiencies, noncompliance with laws and regulations, and other control issues in our annual financial statement audits and related management reports. Appendix II provides summary information regarding the primary issue to which each open recommendation is most closely related. To compile this summary, we analyzed the nature of the open recommendations to relate them to the material weaknesses, the compliance issue, and the other control issues not associated with a material weakness that we identified as part of our financial statement audit. The Internal Revenue Service (IRS) has serious internal control issues that affected its management of unpaid tax assessments.
Specifically, IRS (1) reported balances for taxes receivable and other unpaid assessments that were not supported by its core general ledger system for tax administration, (2) lacked a subsidiary ledger for unpaid tax assessments that would allow it to produce accurate, useful, and timely information with which to manage and report externally, and (3) experienced errors and delays in recording taxpayer information, payments, and other activities. Serious weaknesses in IRS’s internal control over information security continue to jeopardize the confidentiality, availability, and integrity of information processed by IRS’s key systems, increasing the risk of material misstatement for financial reporting. For example, IRS has not restricted users’ ability to bypass application controls, removed separated employees’ systems access in a timely manner, or restricted system access to only those who needed it. These unresolved weaknesses increase the risk that data processed by the agency’s financial management systems are not reliable. Although IRS has made some progress in addressing previous weaknesses we identified in its information systems and physical security controls, as of March 2010, there were 88 open recommendations designed to help IRS improve its information systems security controls. Those recommendations are reported separately and are not included in this report primarily because of the sensitive nature of some of the issues. IRS continues to be noncompliant with the laws and regulations governing the release of federal tax liens. IRS did not always release applicable federal tax liens within 30 days of tax liabilities being either paid off or abated, as required by the Internal Revenue Code (section 6325). The Internal Revenue Code grants IRS the power to file a lien against the property of any taxpayer who neglects or refuses to pay all assessed federal taxes. The lien serves to protect the interest of the federal government and as a public notice to current and potential creditors of the government’s interest in the taxpayer’s property. The 70 recommendations listed below pertain to issues that do not rise individually or in the aggregate to the level of a material weakness or noncompliance with laws and regulations. However, these issues do represent weaknesses in various aspects of IRS’s control environment that should be addressed. In addition to the contact named above, the following individuals made major contributions to this report: William J. Cordrey, Assistant Director; Russell Brown; Ray B. Bush; Nina Crocker; Oliver Culley; Doreen Eng; Charles Fox; Valerie Freeman; Jamie Haynes; Ted Hu; Richard Larsen; Delores Lee; Julie Phillips; John Sawyer; Christopher Spain; Cynthia Teddleton; LaDonna Towler; and Gary Wiggins.
In its role as the nation's tax collector, the Internal Revenue Service (IRS) has a demanding responsibility to annually collect trillions of dollars in taxes, process hundreds of millions of tax and information returns, and enforce the nation's tax laws. Since its first audit of IRS's financial statements in fiscal year 1992, GAO has identified a number of weaknesses in IRS's financial management operations. In related reports, GAO has recommended corrective actions to address those weaknesses. Each year, as part of the annual audit of IRS's financial statements, GAO makes recommendations to address any new weaknesses identified and follows up on the status of IRS's efforts to address the weaknesses GAO identified in previous years' audits. The purpose of this report is to (1) provide an overview of the financial management challenges still facing IRS, (2) provide the status of financial audit and financial management-related recommendations and the actions needed to address them, and (3) highlight the relationship between GAO's recommendations and internal control activities central to IRS's mission and goals. IRS has made progress in improving its internal controls and financial management since its first financial statement audit in 1992, as evidenced by 10 consecutive years of clean audit opinions on its financial statements, the resolution of several material internal control weaknesses, and actions resulting in the closure of over 250 financial management recommendations. This progress has been the result of hard work throughout IRS and sustained commitment at the top levels of the agency. However, IRS still faces significant financial management challenges in (1) resolving its remaining material weaknesses in internal control, (2) developing outcome-oriented performance metrics, and (3) correcting numerous other internal control issues, especially those relating to safeguarding tax receipts and taxpayer information. At the beginning of GAO's audit of IRS's fiscal year 2009 financial statements, 62 financial management-related recommendations from prior audits remained open because IRS had not fully addressed the issues that gave rise to them. During the fiscal year 2009 financial audit, IRS took actions that GAO considered sufficient to close 18 recommendations. At the same time, GAO identified additional internal control issues resulting in 41 new recommendations. In total, 85 recommendations remain open. To assist IRS in evaluating and improving internal controls, GAO categorized the 85 open recommendations by various internal control activities, which, in turn, were grouped into three broad control categories: safeguarding of assets and security activities; proper recording and documenting of transactions; and effective management review and oversight. The continued existence of internal control weaknesses that gave rise to these recommendations represents a serious obstacle that IRS needs to overcome. Effective implementation of GAO's recommendations can greatly assist IRS in improving its internal controls and achieving sound financial management and can help enable it to more effectively carry out its tax administration responsibilities. Most can be addressed in the short term (the next 2 years). However, a few recommendations, particularly those concerning the functionality of IRS's automated systems, are complex and will require several more years to effectively address. GAO is not making any recommendations in this report. 
In commenting on a draft report, IRS stated that it is committed to implementing appropriate improvements to maintain sound financial management practices.
The Housing and Community Development Act of 1974, a major overhaul of housing laws, created the tenant-based and project-based Section 8 rental assistance programs for low-income households. The tenant-based program (now called Housing Choice Vouchers) provides rental assistance to eligible households to rent houses or apartments in the private market from landlords who are willing to accept the vouchers. Under the project-based rental assistance program, HUD enters into contracts with property owners to provide rental assistance for a fixed period of time. The project-based Section 8 program has multiple subprograms, including Section 8 New Construction and Substantial Rehabilitation, Loan Management Set-Asides, Preservation, and Property Disposition. Rental assistance under these project-based Section 8 subprograms has generally been used in conjunction with other public funding. For example, a Section 8 New Construction/Substantial Rehabilitation property could have been financed by a Federal Housing Administration (FHA) insured loan, a Section 202 direct loan, a U.S. Department of Agriculture Section 515 direct loan, or state housing finance agency bonds. Some of these programs provided financing for the construction or rehabilitation of affordable rental housing prior to the 1974 Act (see table 1). Project-based Section 8 assistance may be provided only for tenants with incomes no greater than 80 percent of an area's median income. Tenants generally pay rent equal to 30 percent of adjusted household income. As part of the Section 8 contract, property owners and managers are responsible for ensuring that households meet program eligibility requirements and for calculating households' payments. HUD pays rent subsidies directly to the property owners but does not pay them a separate administrative fee. The owners include their administrative costs in their HUD-approved rents. Project-based Section 8 properties are subject to physical and management reviews. Most Section 8 contracts also require the submission of annual financial reports from property owners. These reviews and reports are intended to ensure management accountability and the sound physical condition of public and assisted housing. HUD's Real Estate Assessment Center (REAC) conducts physical inspections of all HUD multifamily properties every 1 to 3 years, depending on the property's previous physical inspection score. Project-based Section 8 properties are subject to annual management and occupancy reviews to verify compliance with the terms of the project-based Section 8 contracts, regulatory and management agreements, and management plans. In the mid- to late 1990s, Congress and HUD made several important changes to the duration of housing assistance contracts, contract rents, and management of ongoing contracts. In the mid-1990s, because of budgetary constraints, HUD shortened the terms of subsequent renewals after the initial 15- to 40-year terms began expiring. HUD generally reduced the contract renewal terms to 1 or 5 years, with the funding renewed annually subject to appropriations. In 1997, Congress passed the Multifamily Assisted Housing Reform and Affordability Act (MAHRA) to ensure that the rents HUD subsidized remained comparable with market rents. Over the course of the initial contracts with owners, contract rents in some cases had begun to substantially exceed local market rents as market conditions changed.
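Before turning to the MAHRA renewal requirements, the basic subsidy arithmetic described above (an income-eligibility test at 80 percent of area median income and a tenant contribution of roughly 30 percent of adjusted household income) can be illustrated with a minimal sketch. This is an illustration only, not HUD's actual calculation: it assumes the HUD payment simply makes up the difference between the contract rent and the tenant's contribution, it omits income deductions, utility allowances, and minimum-rent rules, and all figures in the example are hypothetical.

```python
def is_income_eligible(annual_income, area_median_income):
    """Project-based Section 8 assistance is limited to households with
    incomes no greater than 80 percent of the area median income."""
    return annual_income <= 0.80 * area_median_income

def tenant_payment(adjusted_monthly_income):
    """Tenant's share: generally 30 percent of adjusted household income."""
    return 0.30 * adjusted_monthly_income

def housing_assistance_payment(contract_rent, adjusted_monthly_income):
    """Assumed simplification: HUD's payment to the owner is the contract
    rent less the tenant's contribution (never below zero)."""
    return max(contract_rent - tenant_payment(adjusted_monthly_income), 0.0)

# Hypothetical example: a household earning $18,000 a year (about $1,500 a
# month, treated here as already adjusted) in an area with a $50,000 median
# income, occupying a unit with a $900 monthly contract rent.
if is_income_eligible(18_000, 50_000):
    print(tenant_payment(1_500))                    # 450.0
    print(housing_assistance_payment(900, 1_500))   # 450.0
```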
MAHRA generally requires an assessment of each property when it nears the end of its original contract term to determine whether the contract rents are comparable to current market rents and whether the property has sufficient cash flow to meet its debt and daily and long-term operating expenses. However, certain projects are exempt from the market comparability requirement (e.g., projects financed by state agency bonds). If the contract rents are higher than market rents, HUD can decrease the contract rents to market rents upon renewal. Conversely, if the expiring contract rents are below market rates, HUD may increase the contract rents to market rates upon renewal. In 1999, because of staffing constraints (primarily in HUD's field offices) and the workload involved in renewing the increasing numbers of rental assistance contracts reaching the end of their initial terms, HUD began an initiative to contract out the oversight and administration of most of its project-based contracts. The entities that HUD hired—typically public housing authorities or state housing finance agencies—are responsible for conducting on-site management reviews of assisted properties; adjusting contract rents; reviewing, processing, and paying monthly vouchers submitted by owners; renewing contracts with property owners; and responding to health and safety issues at the properties. These performance-based contract administrators (PBCA) now administer the majority of project-based Section 8 contracts. In the late 1980s, initial Section 8 contracts began expiring; by 2003, all of the original 20-year contracts had expired. Forty-year contracts will expire between 2014 and 2023. Section 8 owners are offered six options upon contract expiration. According to the HUD Section 8 Renewal Guide, Section 8 owners may renew without any modifications, with rents capped at HUD's market levels; renew with rents that are elevated to market rents through the Mark-up-to-Market program; renew with rents that are reduced to market rents through the Mark-to-Market program; renew as a Section 8 "exception project"; renew as a Section 8 preservation or portfolio reengineering demonstration project; or opt out of the Section 8 contract. When their contract expires, project-based Section 8 owners may decide not to renew their Section 8 contracts and convert their units from affordable housing to market-rate housing. Once owners remove their properties from HUD programs, Section 8 households receive enhanced vouchers as long as they remain in their units. Owners are required to give both tenants and HUD notice of their intention to renew or opt out 1 year before the Section 8 contract's expiration (see fig. 1). An owner who intends to opt out must also provide HUD with a 120-day notification. An owner who intends to renew is required to submit to HUD or the PBCA a request for contract renewal and a rent comparability study (when required) at least 120 days before the contract expires. Local HUD offices review the study to determine if the property's current rents are at, above, or below market rates. If rents are at or below market rates, HUD field office staff will make any necessary adjustments and execute a new Section 8 contract. If rents are above market, HUD staff renews the contract (at above-market rents) for up to 1 year and forwards the owner's submission to the HUD Office of Affordable Housing Preservation (OAHP) for a Mark-to-Market restructuring.
OAHP assigns properties to participating administrative entities (PAE) to carry out restructurings under the Mark-to-Market program on behalf of HUD. The owner then signs a renewal contract with the contract administrator. In a January 2004 report, we found that state and local agencies offer incentives to preserve affordable housing, including project-based Section 8 housing. Some of these agencies perceived that the information on opt-outs was not readily available. In this report, we recommended that HUD make this information more widely available and useful. States and localities may use funds provided by other federal programs to subsidize housing for low-income tenants. The HOME program, authorized by the Cranston-Gonzalez National Affordable Housing Act, is the primary block grant program that state and local governments use to develop affordable housing. Under the Low-Income Housing Tax Credit (LIHTC) Program, authorized by the Tax Reform Act of 1986, state housing finance agencies provide federal tax incentives to private investors to develop housing affordable to low-income tenants. Some states and localities have established housing trust funds and other financial mechanisms that have helped organizations acquire HUD properties and maintain their affordability to low-income tenants when owners want to sell properties and exit the program. Federal housing programs serve many different types of households and provide units that are affordable at different income levels. For example, under the LIHTC program, either 20 percent of units must be affordable to households with incomes of less than 50 percent of area median household income, or 40 percent of units must be affordable to households with incomes of less than 60 percent of the area median income. HUD pays assistance for project-based Section 8 units on behalf of tenants with incomes no greater than 80 percent of area median income. Further, the states and localities may use other tools and incentives, such as offering property tax relief, to encourage owners to keep serving low-income tenants. We found a number of patterns in the volume, characteristics, and locations of HUD's project-based Section 8 housing contract renewals and terminations from 2001 through 2005. First, from 2001 through 2005, 92 percent of project-based Section 8 housing assistance contracts and 95 percent of assisted units that were eligible for renewal were renewed. We also found that the percentages of opt-outs, foreclosures, and enforcements varied by project-based Section 8 subprogram. Relatively few owners opted out of the Section 8 program, and of those we interviewed, most reported that they did so to seek higher rents in the private rental market or to convert their units into condominiums. Second, we found that opt-outs shared other characteristics, such as subsidy level, occupancy type, and physical condition. Finally, opt-outs were more prevalent in some regions and localities. From 2001 through 2005, 14,373 of the 24,000 project-based contracts and 982,701 of the 1.4 million units were determined to be eligible for renewal or termination. Of these, 92 percent of the eligible contracts and 95 percent of the eligible units remained in the program (table 2). The percentage of opt-outs, while small overall, varied by subprogram. As shown in figure 2, only 1 percent of project-based Section 8 contracts whose owners financed the properties through the Section 202 program opted out from 2001 through 2005.
This percentage is generally low largely because Section 202 property owners are nonprofit entities established for the singular purpose of providing housing for the elderly or persons with disabilities, and because the statute requires low-income use at least through the original term of the loan. As a result, it is in the owners' interest to renew their project-based Section 8 contracts. Similarly, Section 8 contracts that also carry a U.S. Department of Agriculture Section 515 mortgage had a much lower percentage of opt-outs (3 percent), in part due to mortgage prepayment restrictions. Conversely, contracts listed under Property Disposition, which covers troubled properties, had the highest percentage of opt-outs, foreclosures, and enforcements. In total, 8 percent of eligible contracts were terminated: 6 percent through opt-outs and 2 percent through foreclosures and enforcements. As shown in figure 3, the total number of project-based Section 8 contract opt-outs nationwide declined from 240 in 2001 to 120 in 2003, but increased slightly in 2004 to 125 and increased further in 2005 to 160. Conversely, the number of foreclosures and enforcements continued to decline slightly over the period. The properties that owners withdrew from the program shared similar characteristics. Specifically, owners with properties that were generally not fully subsidized by the program, were family occupied, were for profit, or were in poor physical condition had a higher percentage of opt-outs. Conversely, we did not find substantial differences in the percentage of opt-outs based on property size, meaning owners with fewer units were as likely to opt out as owners with more units. Properties that were only partially supported by the Section 8 program made up 4,492, or 33 percent, of the total 13,847 Section 8 properties that renewed or terminated their contracts from 2001 through 2005. As shown in figure 4, about 13 percent of the properties with Section 8 subsidy levels below 50 percent that were eligible to opt out during the 5-year period from 2001 through 2005 did so, compared with about 4 percent of the properties that were fully supported by the Section 8 program. Owners with properties with subsidy levels between 50 and 97 percent were as likely to remain in the program as those that were fully supported. These results were consistent with the views of owners about their desire to continue receiving the guaranteed payments that Section 8 provides. About 2 percent of all partially and nearly fully subsidized properties were terminated through foreclosures or enforcement actions. A higher percentage of properties identified as renting to families left the project-based Section 8 program than properties rented to the elderly and persons with disabilities. As shown in figure 5, 9 percent of family-occupied properties opted out of the program from 2001 through 2005, compared with about 2 percent of properties identified as renting to the elderly and persons with disabilities. The lower opt-out percentage for properties renting to the elderly and persons with disabilities can be attributed largely to the fact that many were financed through Section 202. As stated earlier, Section 202 owners find that it is in their interest to continue to serve very low-income elderly persons and persons with disabilities.
Moreover, properties for the elderly and persons with disabilities are generally owned by nonprofit entities and have use restrictions that require low-income use through the term of the properties' original loans. Our analysis also found that family-occupied properties experienced a slightly higher percentage of foreclosures and enforcements than properties for the elderly and persons with disabilities. For-profit and limited-dividend property owners had a higher percentage of opt-outs than other types of property owners. Limited-dividend ownerships are formed under federal or state laws or regulations and can have restrictions involving rents, charges, capital structure, rate of return, or methods of operation. As shown in figure 6, collectively these two types of property owners represented 57 percent of all project-based Section 8 properties and had the highest percentage of opt-outs, at 8 and 6 percent, respectively. Conversely, nonprofit owners had the lowest percentage of opt-outs at 2 percent. The percentage of foreclosures and enforcement actions for nonprofits was also slightly lower than for all other ownership types. HUD officials told us that when properties repeatedly fail physical inspections, they take action to protect the tenants by issuing vouchers and terminating the Section 8 contract. The officials noted that in many cases these owners wish to be relieved of HUD oversight and may believe they can do so by failing to meet HUD requirements. HUD reviews each such case and may take punitive enforcement action against the owner. These owners are more likely to opt out. Physical REAC inspection scores reflect a property's as-is condition, with negative adjustments for certain health and safety issues. Figure 7 shows that 94 percent of the properties received passing scores, with 50 percent of the properties receiving superior scores of over 89 and 44 percent receiving satisfactory scores (60-89). Also, as shown in figure 7, the percentage of opt-outs for properties with substandard or severe scores was substantially higher than the percentage of opt-outs for properties with satisfactory or superior scores. Our analysis of HUD data shows that the percentage of opt-outs varies slightly by region. Certain parts of the country had more opt-outs than others (fig. 8). Several southern states and New England experienced the smallest percentage of opt-outs. Appendixes III and IV contain analyses of the number of opt-outs by state and, for the 3 regions with the highest number of opt-outs, by metropolitan area. Figure 9 shows the national average for opt-outs and the states we visited that experienced a higher percentage of opt-outs than the national average. Consistent with the HUD-commissioned study by Econometrica, Inc., property owners and others we interviewed reported that the location of the property and changes in the valuation of the neighborhood greatly influenced the owner's decision to remain in or leave the Section 8 program. For example, properties located in neighborhoods with higher median incomes, higher median rent levels, and lower poverty and vacancy rates had higher opt-outs as a percentage of all active Section 8 units. Nationwide, over 50 percent of the opt-outs were in metropolitan areas with a million or more residents. HUD offers a number of tools and incentives to property owners seeking additional funding to support their Section 8 properties.
HUD reports that when owners do choose to use the HUD incentives offered, they most often select the Mark-to-Market and the Mark-up-to-Market programs. To a lesser extent, some Section 8 owners are also eligible to participate in the Section 236 decoupling program and the Section 202 refinancing program to obtain additional funding for rehabilitation. However, because these programs are available to only a portion of project-based Section 8 owners and funding for rehabilitation is limited, project-based Section 8 owners also use funds from programs outside of HUD for property rehabilitation. HUD officials, owners, and industry representatives have told us that Section 8 owners often opt to use non-HUD programs such as LIHTC and tax-exempt bonds, which the IRS administers mostly through state housing finance agencies. Both LIHTC and tax-exempt bonds may be combined with HUD incentives to maintain housing at rents affordable to low-income households, but limited data are available to show how often owners make this choice. The Mark-to-Market program, which may consist of a full or "lite" restructuring, often provides an incentive for owners with rents above the market rate to remain in the Section 8 program. Owners that have a contract with the project-based Section 8 program and mortgages that are insured by FHA or held by HUD must participate in the program if their rents exceed the prevailing market level (as determined by HUD). Through a full Mark-to-Market restructuring, the owner is able to finance rehabilitation needs, cover projected operating expenses, and, in some cases, enhance the property's reserve fund to address future capital improvement needs. In exchange for a full Mark-to-Market restructuring, owners virtually always receive a new project-based Section 8 contract with HUD and execute a Use Agreement to maintain the property as affordable housing for at least 30 years. Owners of FHA-insured properties with above-market rents may request to participate in Mark-to-Market lite. This option involves only rent restructuring rather than a full mortgage restructuring and is typically used when owners can reasonably cover all of their expenses at the reduced rents and still maintain an affordable mortgage payment. In addition to accepting lower rents, these owners generally renew their contracts for 5 years and remain eligible to participate in a full Mark-to-Market restructuring at a later date. According to HUD, Mark-to-Market lite is generally used for properties that are in better financial and physical condition and have rents only slightly higher than market rents. Between 2001 and 2005, owners who renewed their contracts using HUD incentives chose this option less often than full restructurings. The Mark-to-Market program was scheduled to expire in October 2006. However, the Revised Continuing Appropriations Resolution of 2007 extended the program for an additional 5 years (through September 2011). In addition, the House and Senate introduced the Mark-to-Market Extension Act of 2007 in January 2007. If enacted, the act would (1) expand the existing Mark-to-Market authorities to provide for higher rents for eligible properties damaged by disasters, (2) expand the program's authority to set rents above existing rent level limits, (3) increase to 5 years the period during which HUD may provide for not-for-profit debt relief, and (4) allow a limited number of projects with rents below market to be eligible for a Mark-to-Market restructuring.
Owners with below-market rents may participate in the Mark-up-to-Market program, which permits them to raise rents to either market rates or 150 percent of the HUD-determined fair market rent, whichever is less. The program provides additional rental revenue for property operations and renovation and increased distributions to owners of limited-dividend projects. Typically, Mark-up-to-Market transactions occur in rental markets with escalating rents that have exceeded HUD's established rent levels for area properties. The program's goal is to encourage owners to renew their contracts and remain in the Section 8 program by removing the economic incentive to opt out. HUD also has a Mark-up-to-Budget program, which is a variation of the Mark-up-to-Market program and has been used as an incentive for nonprofit owners to preserve Section 8 properties with below-market rents. The nonprofit owners must justify higher rents based on their operating budget and repair costs. Under this program, when current rents are not sufficient, HUD permits a budget-based Section 8 rent increase so that nonprofit properties can perform capital improvements that will maintain the long-term financial and physical viability of the property. According to HUD, Mark-up-to-Budget may be used by a nonprofit to either facilitate a purchase transaction or finance needed repairs. HUD has offered a number of other incentives to preserve affordable housing, such as the Section 236 decoupling, Section 202 refinancing, and HOME programs, but only certain properties in the project-based Section 8 portfolio are eligible to take advantage of these incentives. Under Section 236 of the National Housing Act, HUD provides a monthly Interest Reduction Payment (IRP) subsidy to reduce the mortgage interest rate paid by property owners effectively to 1 percent. The Section 236 decoupling program allows leveraging of the IRP to benefit the owner and the property and to provide funds for rehabilitation. For example, we visited a nonprofit's 72-unit Section 8 property in Baltimore that, according to the property manager, had not undergone a major renovation in more than 30 years. Because the property had a Section 236 mortgage and project-based Section 8 assistance, the owner was eligible to participate in the Section 236 decoupling program. Through the decoupling program, the owner was able to receive additional funds to make necessary repairs to the property and to begin construction of a new community center. HUD also administers a Section 202 refinancing program that allows owners to refinance their direct HUD loans while maintaining their Section 8 rent levels. According to HUD's August 2006 Report to Congress, the Section 202 refinancing program was used sparingly from 2001 through 2005, but activity in the program increased significantly during fiscal year 2006. In exchange for the refinancing, owners must agree to maintain affordable occupancy restrictions, comply with HUD requirements, and undertake appropriate rehabilitation of the property. HOME is the largest federal block grant to state and local governments and is designed exclusively to create affordable housing for low-income households. Each year the program allocates approximately $2 billion among the states and hundreds of localities nationwide. While HUD does not maintain data on the number of project-based Section 8 properties that use HOME funding, HUD officials have indicated that HOME funds have been used as an incentive to keep project-based Section 8 owners in the program.
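Taken together, the rent rules behind the Mark-to-Market and Mark-up-to-Market renewals described above can be summarized in a simple sketch: above-market contract rents can be reduced to comparable market rents, while below-market rents can be raised up to the lesser of the comparable market rent or 150 percent of the HUD-determined fair market rent. The sketch below is a simplified illustration of those two rules only; it ignores eligibility criteria, exception projects, and the budget-based variant described above, and the example figures are hypothetical.

```python
def markup_to_market_ceiling(comparable_market_rent, fair_market_rent):
    """Mark-up-to-Market cap: the lesser of the comparable market rent or
    150 percent of the HUD-determined fair market rent."""
    return min(comparable_market_rent, 1.5 * fair_market_rent)

def renewal_rent(contract_rent, comparable_market_rent, fair_market_rent):
    """Simplified renewal outcome for an expiring project-based Section 8 contract."""
    if contract_rent > comparable_market_rent:
        # Above-market rents can be reduced to market through Mark-to-Market.
        return comparable_market_rent
    # Below-market rents can be raised through Mark-up-to-Market, subject to the cap.
    return markup_to_market_ceiling(comparable_market_rent, fair_market_rent)

# Hypothetical monthly rents, in dollars:
print(renewal_rent(1_200, 1_000, 900))  # 1000 -- marked down to the market rent
print(renewal_rent(800, 1_100, 700))    # 1050.0 -- capped at 1.5 x 700 rather than 1100
```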
HUD officials, property managers, and industry groups told us that project-based Section 8 owners also combine HUD preservation tools and incentives with non-HUD preservation tools such as the LIHTC and tax-exempt bonds to provide additional funds for rehabilitation. LIHTC and tax-exempt bonds can be used by themselves or with HUD incentives such as Mark-to-Market to provide the Section 8 owner with funding for substantial rehabilitation and repairs while keeping the property affordable for low-income tenants. By combining incentives, the owner would have enough resources for capital improvements while at the same time ensuring that the property remained affordable through use agreements for at least 30 years. However, because LIHTC and tax-exempt bonds are administered by state and local housing finance agencies, HUD does not consistently collect data on the number of Section 8 properties using these incentives. According to HUD officials, industry groups, and owners, project-based Section 8 owners often use LIHTC to provide additional funding for rehabilitation. To be eligible for consideration under the LIHTC, a proposed property must be a residential rental property; commit to one of two possible low-income occupancy thresholds; restrict rents, including utility charges, in low-income units; and operate under the rent and income restrictions for 30 years or longer in accordance with written agreements with the agency issuing the tax credits. State and local housing finance agencies also sell tax-exempt housing bonds (commonly known as Mortgage Revenue Bonds and Multifamily Housing Bonds) and use the proceeds for several purposes. These include financing low-interest mortgages for low- and moderate-income homebuyers and acquiring, constructing, and rehabilitating multifamily housing for low-income renters, including Section 8 properties. While most owners renewed their contracts, some told us that they had concerns with certain HUD policies and practices. Some described multiple frustrations that led to what they and industry representatives called "HUD fatigue." They said that frustrations with HUD could result in owners opting out of their contracts even when doing so might not be in their economic interest. Among the frustrations they discussed were HUD's one-for-one replacement policy for Section 8 units; policies and procedures that could lead to economic distress, especially Operating Cost Adjustment Factor (OCAF) adjustments; and a lack of clarity and consistency on HUD's part in applying policies. We found that the one-for-one replacement policy, in particular, resulted in the loss of some properties and higher vacancy rates that could potentially lead to foreclosure. Industry representatives whom we interviewed agreed that HUD could improve its policies and procedures for project-based Section 8 housing, and both industry representatives and owners offered suggestions for steps HUD could take to improve preservation efforts. In the locations we visited, we spoke to owners and managers who either renewed their project-based Section 8 contract or decided to opt out of the program. Of those owners and managers who decided to remain in the program, many told us that their primary motivation was the guaranteed rental income that the Section 8 subsidy provided. Some of the managers in depressed rental markets in the locations we visited told us that they would be unable to fill units or would have high vacancy rates if they were to opt out of the Section 8 program.
As we have seen, nonprofit owners rarely decided to opt out of the Section 8 program and told us that they stayed in the program because their mission was to provide affordable housing. Generally, Section 8 owners and property managers in the locations we visited said that HUD did not encourage them to opt out of the Section 8 program. Rather, most stated that HUD tried to keep them in the program by using various tools and offering incentives, such as the Mark-to-Market and Mark-up-to-Market programs. HUD officials also stated that although their goal was to preserve as many project-based Section 8 housing units as they could, the final decision on whether to renew or opt out was made by the owner and in most cases was driven by market factors that were beyond HUD's control. Some owners who left the program said that their decision was based on economic or market factors and not on dissatisfaction with HUD. Nonetheless, many of the owners (both those that remained in and those that had left the Section 8 program), managers, and industry representatives with whom we spoke cited areas in which the Section 8 program could be improved. Owners and managers expressed concerns regarding specific HUD policies and practices that could result in opt-outs or foreclosures, could cause financial distress, or lacked clarity and consistent application. Figure 10 illustrates project-based Section 8 owners' frustrations with HUD that have caused opt-outs in the past or could possibly increase the number of future opt-outs. As shown in the graphic, although the majority of the opt-outs occur because of economic or market factors, growing owner frustration could upset the balance, causing more owners to consider opting out even when economic conditions could be overcome or mitigated. Some owners, managers, and industry representatives told us that some HUD practices have not always kept pace with changes in market conditions. For example, some owners told us that HUD required one-for-one replacement of Section 8 units when owners renewed their contracts. That is, HUD generally does not allow owners to reduce the number of project-based Section 8 units or to reconfigure the units to better meet market demand, even when the alternative could result in owners opting out and removing all of their units from the program. HUD officials told us that although there was no statutory requirement for one-for-one replacement of project-based Section 8 units, the unwritten policy had been to require replacement of units in all cases. HUD officials said that they based this policy on the public housing requirement set by the Housing and Community Development Act of 1987. However, Congress waived the one-for-one replacement requirement for public housing units from 1995 through 1998, and the Quality Housing and Work Responsibility Act of 1998 permanently eliminated it for public housing. HUD officials said that their rationale for maintaining their policy was that many of the properties had long waiting lists and that any reduction in the number of available units was counter to a demonstrated need for affordable housing. Some owners, managers, and industry representatives pointed to the one-for-one replacement requirement for all units as an example of one of their frustrations with HUD policies.
Some owners told us that HUD would not allow them to reduce the number of Section 8 units in a property or reconfigure the units to better meet market demand, even when some types of units had high vacancy rates and other types had long waiting lists. The requirement was particularly troublesome for owners of properties containing efficiency apartments, which in some areas were not in high demand. These owners wanted to replace the efficiency apartments with fewer one-bedroom units, which were in demand. For example, one nonprofit that primarily serves the elderly told us that even though the HUD field office approved a transaction converting efficiencies into fewer one-bedroom units for one of its properties, HUD headquarters reversed that decision based on its one-for-one replacement policy. Also, a member of the National Affordable Housing Management Association (NAHMA), an association that represents property management agents, told us that the owners of an Iowa property rented to elderly tenants had difficulty filling efficiency units. NAHMA officials said that one of the owners' major obstacles in converting to one-bedroom units was getting HUD's approval to waive the one-for-one replacement policy. This lack of flexibility on the part of HUD in insisting upon one-for-one replacement, rather than—for example—evaluating each case on its own merits, could hinder the preservation of certain project-based Section 8 units. In at least one case, a property owner left the project-based Section 8 program because the owner could not convert some units into market-rate housing. The owners of a property in Chicago wanted to split their Section 8 contract and convert 3 of the 82 units to condominiums, preserving the rest as Section 8. According to the owners, splitting the contract made sense because the three units were in a building that was separate from the remaining 79 units. HUD's Chicago Field Office told the owners that they could not split the Section 8 contract because of the one-for-one replacement policy. As a result, the owners opted out, and all 82 units left the Section 8 program. Other industry groups, including NAHMA, the National Leased Housing Association, and the law firm of Nixon Peabody, which represents owners and managers, also agreed on the need for HUD to adapt to changing market conditions in reconfiguring Section 8 units. These representatives told us that some of their transactions involving project-based Section 8 units were being held up by issues relating to reducing the number of unmarketable efficiencies or reconfiguring other Section 8 units. HUD headquarters officials told us that they were aware of the problem and that they were rethinking their policy, particularly as it applied to units for elderly tenants, but were concerned about setting a precedent for owners to request unit reductions even when market factors were not an issue. HUD officials told us that they had initially planned to focus on providing flexibility to elderly developments affected by the one-for-one replacement policy. But the officials added that they had seen the need to assess the impact that the one-for-one replacement policy was having on family properties as well. Nevertheless, not allowing owners to reconfigure the number of units in their Section 8 contract in certain cases could result in some owners deciding to opt out of the Section 8 contract altogether.
Some of the owners, managers, and industry representatives told us that the OCAF inflation adjustments that owners are entitled to receive every year are not timely, equitable, or responsive to price hikes or emergency situations. OCAF adjustments are calculated by HUD annually using nine expense categories, including utilities, property taxes, and insurance, with the data aggregated at the state level. Section 524 of MAHRA gives HUD broad discretion in setting OCAF adjustments, with one exception: the application of an OCAF adjustment may not result in a negative rent adjustment. Owners, managers, and industry representatives were concerned that the OCAF adjustments were not made on a timely basis. According to a number of industry groups, the adjustments are often obsolete by the time they are adopted. HUD officials confirmed that there was a lag of about 15 to 18 months from the time that HUD collected the data to the time that the adjustments became effective. One industry representative told us that HUD was unable to revise the adjustments to respond to any cost hikes during the lag period. Some of the owners and the industry representatives also told us that they were concerned about the unequal distribution of OCAF adjustments within states. Some owners and industry representatives pointed out that the formula HUD used did not take into account differences in markets within states for commodities such as electricity and insurance. They said that in some markets, the costs of utilities and insurance often escalated monthly, while in other areas these costs were relatively stable. For example, a property manager in New York City told us that it did not seem equitable to have the same OCAF adjustment for New York City, where costs were extremely high and likely to fluctuate precipitously, as for upstate New York, where costs were much lower. Some of the owners, managers, and industry representatives that we talked to also said that OCAF adjustments were not able to respond to price hikes or emergency situations in many parts of the country. For example, a member of NAHMA that managed elderly developments in Iowa told us that the OCAF adjustments during the last 4 years had been too small given the rapid escalation of natural gas rates in that region of the country. As a consequence, the management company had to use capital reserves to address operating cash deficits, putting it at risk of being unable to cover unexpected capital repairs. Another NAHMA member that managed a 120-unit project-based Section 8 property for the elderly in Minnesota said that heating costs had increased 22 percent in 2006 over the previous year but that the OCAF adjustment for 2006 was only 2.8 percent. NAHMA officials said that rising utility costs had become an enormous challenge for many Section 8 owners. In particular, NAHMA officials noted that HUD needed a more timely mechanism to address emergency operating cost increases—for example, after natural disasters. Officials from Stewards of Affordable Housing, a group representing some of the largest nonprofits that own and manage project-based Section 8 properties, also stated that OCAF adjustments did not keep up with inflation. For instance, a 2006 survey of members of the Florida Association of Homes for the Aging and the Southeastern Affordable Housing Management Association reported that none of the respondents had had an insurance premium increase of less than 50 percent between 2005 and 2006.
Further, the survey found that, on average, premiums had doubled in one year, and one respondent reported a tenfold increase in its insurance premium. HUD officials, including the Deputy Assistant Secretary for Multifamily Housing, said that they were aware of the lag in OCAF adjustments, the equity concerns, and the difficulties in responding to price hikes and emergency situations. They also said that HUD was taking steps to address these issues. In the short term, HUD officials said that they were allowing owners to tap into their capital reserve accounts to cover unforeseen operating cost increases. However, this practice works only as long as reserves are available or future OCAF adjustments are guaranteed. In the long run, HUD officials plan to evaluate ways to change the OCAF adjustment factors and make them responsive to market factors. To deal with the issue of market differences within a state, HUD is currently considering a proposal to make adjustments to OCAF using data from metropolitan areas instead of states. HUD officials said that they were also considering an industry group’s proposal to address owners’ concerns about price hikes and emergency situations. The proposal would authorize owners to borrow against future rent adjustments using their capital reserve accounts as collateral. Owners and industry groups contended that if HUD neglected to revise the OCAF adjustment process, owners in high-cost areas or those experiencing emergency cost escalations might not receive enough in subsidies to meet their expenses and could consider opting out of the program. Some owners, managers, and industry groups expressed concerns that some HUD policies and procedures could affect the owners’ cash flows and undermine their ability to undertake needed rehabilitation of their properties. Among these were (1) late subsidy payments to owners, (2) high administrative costs relative to the number of Section 8 units in a property, (3) confusion about the Limited English Proficiency requirement, and (4) unclear or vague HUD policies and procedures. Several owners and HUD staff told us that project-based Section 8 Housing Assistance Payments were frequently late, especially when HUD was operating under continuing resolutions. In November 2005, we reported that from fiscal years 1995 through 2004, HUD disbursed three-fourths of its monthly Section 8 payments on time but that thousands of payments were late each year. Owners who are heavily reliant on HUD’s subsidy to operate their properties are the most likely to be severely affected by payment delays. Owners reported receiving no warning from HUD when payments would be delayed and said that such notification would allow them to mitigate the effects of a delay. In our November 2005 report, we recommended that HUD, among other things, streamline and automate the contract renewal process to prevent processing errors and delays and eliminate paper/hard-copy requirements to the extent practicable; develop systematic means to better estimate the amounts that should be allocated and obligated to project-based Section 8 payment contracts each year; monitor the ongoing funding needs of each contract; ensure that additional funds were promptly obligated to contracts when necessary to prevent payment delays; and notify owners if their monthly payments would be late, including in such notifications the date when the monthly payment would be made.
In response to the report, HUD officials said that they would take actions to improve the funding allocation process and to develop a system to more promptly notify owners when payments were expected to be late. Owners told us that when they did not receive payments on time, they often had to use reserve funds to cover critical operating expenses, leading to cash flow problems. During these periods, some owners delayed needed maintenance to make up for the budget shortfall. For example, we found in our work for this report that in Baltimore, a nonprofit owner of a project-based Section 8 property for elderly residents delayed critical repairs to the boiler system when the payments were delayed. The owner used reserve funds that should have been used for repairs to cover operating costs. This situation contributed to a lower physical REAC score for the property because the boiler was in need of repair. HUD headquarters officials told us that they had created a working group of HUD officials and industry representatives that would provide recommendations to HUD for improving its budget process to reduce late Section 8 payments. HUD officials said that they require the same information and documentation from all owners, no matter how many Section 8 units they own. Therefore, owners with a few Section 8 units may find the administrative costs of participating in the program burdensome. Some of the property owners we met with confirmed this point. HUD officials said that owners with larger numbers of Section 8 units were able to spread the fixed administrative costs across more units and achieve economies of scale. Most of these owners’ expenditures went toward hiring dedicated staff to manage the program, which requires separate accounting, management, occupancy, and oversight systems. The owners said that they were also incurring costs for background checks on Section 8 applicants and annual tenant recertifications. For example, in Columbus, Ohio, a manager told us that an owner with a few Section 8 units decided to opt out in 2002 because of the high administrative costs of keeping 24 Section 8 units in a development that had a total of 141 units. The manager said that by opting out, the owner saved up to $25,000 in payroll costs and was still able to keep the majority of the tenants who were eligible to receive Section 8 assistance through tenant vouchers administered by the local public housing agency. HUD field office staff in Columbus told us that for some owners who had few Section 8 subsidized units, keeping separate financial, management, and occupancy records for both Section 8 and other tenants might not be feasible. The January 2006 HUD-commissioned study by Econometrica, Inc., reported a similar finding. The study noted that owners with a smaller portion of their portfolio in Section 8 units incurred additional operating costs for maintaining staff members with the skills needed to administer the Section 8 program. The study concluded that operating a Section 8 property required administrative skills specific to the program and that it might not be economically feasible for these owners to employ staff members with the needed skills. There is some concern and confusion among project-based Section 8 owners and managers about what is required of them to comply with their obligations to persons with limited English proficiency.
Under Title VI of the Civil Rights Act of 1964 and its implementing regulations, recipients of federal financial assistance have a responsibility to ensure meaningful access to programs and activities for these individuals. Presidential Executive Order 13166, “Improving Access to Services for Persons with Limited English Proficiency,” directs each federal agency that extends assistance subject to Title VI to publish guidance for its recipients clarifying their obligations to persons with limited English proficiency. HUD published the final “Guidance to Federal Financial Assistance Recipients Regarding Title VI Prohibition against National Origin Discrimination Affecting Limited English Proficient Persons” on January 22, 2007. Under this guidance, recipients of HUD funds use four factors to determine the extent of their obligations to provide services to those with limited English proficiency. The four factors are (1) the number or proportion of such persons who are eligible to be served or likely to be encountered by the program or grantee, (2) the frequency with which these persons come in contact with the program, (3) the nature and importance of the program, activity, or service provided by the program to people’s lives, and (4) the resources available to the grantee/recipient and costs. Based on these factors, a HUD recipient would develop an implementation plan to address the identified needs of the populations it serves that have limited English proficiency. Some owners, managers, and their representatives said that they agreed with the goal that this group have access to HUD programs but that it was not clear how HUD was implementing this order. In particular, they were concerned about the lack of clarity in describing the written translations and oral interpretation services HUD was to provide and those that would be the owners’ responsibility. NAHMA officials stated that the perception was that the owners would have to bear most of the cost of providing the written translations of vital documents and oral interpretation services free of charge to both applicants and residents. However, these officials noted that HUD had proposed no additional funding to offset these higher costs. Furthermore, NAHMA officials said that expenses for translating documents or providing interpretation services were not accounted for in the OCAF adjustments or included in rent comparability studies. NAHMA officials added that they were concerned because HUD was already holding property owners accountable to the requirements for limited English proficiency as part of fair housing and compliance reviews. These officials stated that holding the owners to these requirements could expose affordable housing owners to unwarranted fair housing complaints and discrimination lawsuits. Also, NAHMA officials stated that adding this regulatory expense without increasing compensation changed the nature of the agreement between HUD and the property owner. Given this extra cost and additional legal liability, owners could be inclined to leave the program, because they would not have to deal with the requirement once they had opted out. Some owners, managers, and industry representatives raised concerns about the clarity of HUD policies and procedures and the way the policies were applied. Of particular concern were the Section 8 Renewal Guide and the REAC physical inspection score. MAHRA established policies for renewing project-based Section 8 contracts, and HUD adopted implementing regulations in 1998.
The rules and procedures were then incorporated in the Section 8 Renewal Guide, which HUD published in 1999. HUD officials noted that they were currently in the process of issuing updates to the Renewal Guide. However, according to a group representing the private owners, only parts of the Renewal Guide had been updated despite many changes to HUD’s policies and procedures, particularly regarding the Mark-to-Market and Mark-Up-to-Market programs. Largely as a result of the out-of-date information, the guide can be confusing, particularly to owners that have a few project-based Section 8 units. Property owners and industry representatives cited gray areas in the guide, particularly concerning the Mark-to-Market option. For example, in Baltimore we visited two small nonprofits that owned Section 8 properties. Property managers for both properties faced challenges navigating complex HUD policies that they said the guide did not adequately explain, such as when and under what conditions the owner could choose a different renewal option. While several nonprofit groups offer training on HUD policies for project-based Section 8 properties, a property manager told us that the nonprofits did not have the resources to pay for such training on their own. We also were told that a lack of understanding of HUD policies had caused some owners to receive low scores on management reviews, which compromised their Section 8 status. HUD officials told us that they had set up a task force to examine the guide and that it was currently being updated. REAC inspections are an integral part of HUD’s efforts to oversee the properties in its inventory of affordable housing. HUD’s physical inspection standards require that multifamily housing be decent, safe, sanitary, and in good repair. The standards establish specific requirements for the site, the dwelling units, and common areas. HUD has developed a detailed list of items that inspectors are required to review at properties and specifically defines what constitutes a deficiency for each inspected unit. However, some owners, managers, and representatives of multifamily housing industry groups we interviewed had concerns about the reliability, consistency, and fairness of REAC’s inspections. For example, owners and property managers in New York City and Houston indicated that REAC inspectors recorded violations for minor issues that often were outside of the managers’ control. Some of the owners also stated that they were cited for isolated minor violations rather than evaluated on the cumulative condition of a property and that inspections tended to be arbitrary. For example, HUD’s Chicago field office and a Chicago nonprofit reported that REAC inspectors ignored the deteriorating overall condition of a property because the inspectors were either inexperienced or afraid to enter some of the buildings. Specifically, Chicago’s Lawndale apartments—which had one owner with 1,105 units in 104 buildings spread over a large area in North Lawndale—received passing REAC physical condition scores, although the overall complex was in disrepair. The end result was that Lawndale was to be split up and sold to a number of owners, resulting in about 700 of the 1,105 Section 8 units leaving the project-based Section 8 program. HUD officials told us that because of the enormous size of the Lawndale apartments, the complex was not a typical HUD Section 8 project-based property.
They defended the REAC process, stating that the random nature of its inspections could result in passing scores at a large project like Lawndale, which had a mix of substandard and passing units. They believed that what happened at Lawndale was an isolated incident but that such an outlier should have been more carefully monitored by HUD. Most project-based Section 8 property owners opt to renew their contracts with HUD, but the 8 percent of expiring contracts that were not renewed from 2001 through 2005 represent over 50,000 units that are no longer subsidized through the program. Our work identified some recurring program issues and concerns, including the rigidity of the one-for-one replacement requirement, difficulties with the OCAF adjustments, and other administrative burdens, all of which could affect the program’s positive retention rate as more properties come up for renewal. Based on the views of Section 8 owners and managers we interviewed, HUD’s one-for-one replacement policy has made certain properties vulnerable to exiting the program. In particular, not allowing owners to reconfigure hard-to-fill efficiency apartments in some markets into fewer one-bedroom units could cause financial difficulty for owners and lead to a decision to opt out of the program. Also, by not allowing owners to reduce the number of units in a property in order to maintain one-for-one replacement, HUD may inadvertently be forcing owners out of the program. Consistent with congressional action that eliminated the one-for-one replacement requirement in HUD’s Public Housing programs, we are encouraged that HUD has started to rethink this policy in light of changing market conditions, especially for the elderly, and we understand the difficulty HUD faces in balancing the need to preserve affordable housing with the requests of property owners. However, without a more flexible policy, HUD risks losing more properties from the Section 8 program. As more contracts come up for renewal, owners may continue to leave the program if they do not have the flexibility to make the changes to existing housing stock that the market demands. HUD’s field offices, which are best situated to understand local market needs, may be in the best position to make these types of property decisions. The OCAF adjustment process, which is required by MAHRA, is another area that may threaten HUD’s preservation efforts. As currently implemented, HUD’s estimates of costs for items such as utilities and insurance in some cases do not reflect current market conditions, primarily because they are developed 15 to 18 months before they take effect and are applied statewide. As a result, property owners in high-cost areas may not receive enough in subsidies to meet their expenses. Moreover, during emergency situations HUD does not have a process to address rapidly changing prices, such as spikes in energy costs or rapidly increasing insurance rates in coastal areas. Ultimately, owners divert money from capital improvement projects to cover such operating expenses. These types of issues could result in more owners leaving the program. Given that many property owners emphasized that guaranteed rental income was a primary reason for staying in the program, HUD needs to help ensure that properties are compensated for the cost increases they incur. If HUD does not act quickly to review the OCAF adjustment process, property owners may be forced to leave the Section 8 program due to lack of sufficient funding.
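To make the arithmetic behind this concern concrete, the sketch below computes a hypothetical OCAF-style adjustment as a weighted average of expense-category increases. The expense shares and inflation figures are illustrative assumptions, not HUD’s published factors; the point is simply that a sharp increase in one category, such as the 22 percent heating cost increase reported by the Minnesota property manager, can translate into a single-digit overall adjustment even before statewide averaging dilutes it further.

# Illustrative only: hypothetical expense shares and cost increases,
# not HUD's actual OCAF weights or published data.
categories = {
    # category: (assumed share of operating costs, assumed annual cost increase)
    "utilities":       (0.15, 0.22),   # e.g., heating costs up 22 percent
    "insurance":       (0.10, 0.06),
    "property_taxes":  (0.15, 0.03),
    "wages_and_other": (0.60, 0.02),
}

adjustment = sum(share * increase for share, increase in categories.values())
print(f"Weighted OCAF-style adjustment: {adjustment:.1%}")
# Prints roughly 5.5 percent with these assumed figures, far below the
# 22 percent utility increase, because utilities are only one weighted category.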
Finally, owners, managers, and industry representatives raised a number of other issues that could drive them out of the program. These issues included certain policies and procedures that were described as unclear, inconsistently applied, or administratively burdensome. Specifically, late subsidy payments, higher administrative costs for owners with fewer Section 8 units, confusion about requirements for persons with limited English proficiency, and unclear HUD policies and procedures could contribute to owners opting out of the Section 8 program, taking units that cannot be replaced out of the affordable housing stock. To help ensure that project-based Section 8 preservation efforts meet the needs of a changing housing market, we recommend that the HUD Secretary direct the Deputy Assistant Secretary for Multifamily Housing to modify the one-for-one replacement requirement to allow for a case-by-case assessment of the merits of permitting owners to reduce the number of project-based Section 8 units or reconfigure the units to better meet market demand and to expand its reconsideration of this policy beyond elderly properties; expeditiously reevaluate its OCAF adjustment process to make sure that the adjustments reflect local variations, are implemented in a more timely manner, and are responsive to emergency situations; and determine if any of the additional issues raised by owners, such as policies and procedures that are unclear, inconsistently applied, or administratively burdensome, could contribute to owners’ opting out of the Section 8 program and take steps to address these issues. We received comments on a draft of this report from HUD’s Assistant Secretary for Housing-Federal Housing Commissioner; these comments are reproduced in appendix II. The Commissioner generally agreed with the report and noted that it confirmed that HUD was not encouraging property owners to opt out of the project-based Section 8 program but rather was using a variety of tools to encourage continued participation. He also said that the report contained several positive suggestions for improving program delivery, but added that none of the recommendations would likely deter owners seeking to maximize their economic gains in a “hot” real estate market from leaving the program. We agree that most owners that opt out of the project-based Section 8 program do so because of market factors rather than dissatisfaction with HUD’s preservation efforts. However, given the finite supply of project-based Section 8 properties, addressing some of the recurring program issues and concerns we identified could help keep some owners from opting out of the program. The Commissioner also noted that the report lacked data on the number of opt-outs that might have been avoided if the proposed recommendations had been implemented. We agree that such data would have allowed us to determine specific reasons owners opted out of the program, but because HUD does not track properties and the reasons that they leave the program, the data were not readily available. Addressing our recommendation that HUD modify the one-for-one replacement policy to allow for case-by-case assessments of requests to reduce the number of or reconfigure existing units, the Commissioner expressed concern that revising the policy might save one or two projects from opting out but lead to a greater net loss of assisted units.
He added, however, that HUD was aware of the need to accommodate market demand and would be evaluating the policy and identifying criteria for approving such requests. We are encouraged that HUD is considering a more flexible policy and continue to support the position that criteria can be developed that balance market demand and the need to preserve affordable housing. Regarding our recommendation that HUD expeditiously reevaluate its OCAF adjustment process, the Commissioner wrote that the department was aware of industry concerns about the use of statewide data, the approximately 18-month lag between the time the data are collected and the time the adjustments go into effect, and the fact that OCAF does not take into account emergency situations. He noted that HUD had initiated a review of the OCAF methodology, including the actual costs to the portfolio resulting from the lag time and the use of statewide data, and planned to complete and announce the results of the review by the end of fiscal year 2007. Concerning our recommendation that HUD determine if any of the additional issues that property owners raised could be contributing to the decision to opt out of the program, the Commissioner said that HUD was aware of the concerns we cited and was always willing to consider recommendations that could reduce administrative costs and encourage owners to stay in the program. For example, he acknowledged that the project-based Section 8 payments were late from time to time but added that the agency was committed to improving the process and would provide updates on its progress to GAO and the Congress. We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Appropriations; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Appropriations; the Chairman and Ranking Minority Member of the House Committee on Financial Services; the Secretary of HUD; and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To assess the Department of Housing and Urban Development’s (HUD) efforts to maintain Section 8 project-based housing stock and identify any discernible patterns in its preservation efforts, we reviewed the department’s five-year analysis of units terminated and retained by year, state, and locality for the period 2001-2005. HUD’s analysis is contained in a report to Congress, Section 8 Project-Based Contract Renewals, sent to the Senate Appropriations Committee in August 2006. To facilitate this effort, HUD’s Office of Multifamily Programs and Systems, in June 2006, provided us with a data extract containing information on all Section 8 contract activity for the 5-year period.
This extract incorporated and combined data from HUD’s Real Estate Management System (REMS), which reflects historical information on all properties in HUD’s multifamily portfolio; DATAMART, a subset of REMS, which depicts data for all active multifamily properties; the Tenant Rental Assistance Certification System (TRACS), which illustrates historical activity for all multifamily properties subsidized by HUD; and the department’s Real Estate Assessment Center (REAC) database system, which shows the most recent physical and financial conditions of properties in HUD’s multifamily portfolio. To determine the number of Section 8 project-based units renewed and terminated during the 5-year period, as well as the characteristics and locations of their associated properties, we reviewed, analyzed, and replicated all numbers contained in HUD’s report relating to Section 8 contracts that left or remained in HUD’s portfolio during 2001-2005. By comparing renewals and terminations, we determined the extent to which HUD’s Section 8 project-based housing stock grew or declined. Our analysis also enabled us to observe patterns associated with such actions. Following the same methodology HUD employed in its Report to Congress, we counted individual contract renewals and their associated units only once, irrespective of how many times an owner renewed the contract. Moreover, we only considered contracts as renewals if such contracts were active at the end of 2005. In contrast, terminated contracts included all situations where owners opted out of their Section 8 contractual obligations anytime during the 5-year period; mortgage foreclosures; and contracts terminated by HUD due to enforcement actions. We counted contractual terminations as a single event because, by definition, the contract no longer exists. We also used the database extract to analyze characteristics of properties that left or remained in the Section 8 program that HUD did not address in its report. For instance, we evaluated the types of rental assistance associated with renewals and terminations; occupancy and unit characteristics of properties whose owners elected to renew or opt out of their contractual obligations; and the physical and financial conditions of such properties. In addition, to determine how many contract renewals or terminations occurred in which geographic locations, and whether any patterns in such locations existed, we obtained census divisions from the Census Bureau website and mapped properties using the divisions. Our analysis enabled us to depict the locations where HUD was losing or gaining Section 8 housing stock at the county level. To ensure that the HUD data were reliable, we performed various electronic tests and checks to determine (1) the extent to which the data were complete and accurate, (2) the reasonableness of the values reflected in the data variables, (3) if any data fields had missing values, and (4) whether any data limitations existed in the data we relied upon to do our work. In addition, we reviewed existing information about the quality and controls of the data systems and discussed the data we analyzed, as well as the programming code used to manipulate such data, with agency officials to ensure that we interpreted them correctly to do our analysis. Based upon our reliability assessment, we concluded that HUD’s data were sufficiently reliable for purposes of this report.
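As a rough illustration of the counting rules and electronic checks described above, the sketch below shows how such an analysis might look against a simplified, hypothetical flat extract. The file name and column names (contract_id, units, action_date, status_end_2005) are placeholders, not the actual REMS, DATAMART, TRACS, or REAC field names.

import pandas as pd

# Hypothetical flat extract: one row per contract action during 2001-2005.
df = pd.read_csv("section8_contract_extract.csv", parse_dates=["action_date"])

# Reliability checks: completeness, missing values, and reasonable ranges.
print("rows missing a contract identifier:", df["contract_id"].isna().sum())
print("rows missing a unit count:", df["units"].isna().sum())
print("rows with nonpositive unit counts:", (df["units"] <= 0).sum())

# Count each contract once, no matter how many times it was renewed,
# by keeping only the latest action recorded for each contract.
latest = (df.sort_values("action_date")
            .groupby("contract_id", as_index=False)
            .last())

# A contract counts as a renewal only if it was still active at the end of 2005;
# opt-outs, foreclosures, and enforcement terminations each count once.
renewals = latest[latest["status_end_2005"] == "active"]
terminations = latest[latest["status_end_2005"] != "active"]

print("contracts renewed:", len(renewals), "units:", int(renewals["units"].sum()))
print("contracts terminated:", len(terminations), "units:", int(terminations["units"].sum()))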
Moreover, our analysis determined that the information reflected in HUD’s report to Congress was accurate and reliable for purposes of ascertaining the extent to which Section 8 contracts and their associated units were terminated or gained during the 5-year period 2001-2005. The data we obtained from HUD were current as of June 15, 2006. To identify the tools and incentives available to HUD to preserve project-based Section 8, we reviewed and summarized legislation and regulations pertaining to Section 8 project-based housing preservation, including the Multifamily Assisted Housing Reform and Affordability Act (MAHRA) of 1997 and the Section 8 Renewal Guide. To identify the incentives offered to Section 8 owners, we conducted interviews with HUD headquarters staff in Washington, D.C., and field office staff in Baltimore, Maryland; Chicago, Illinois; New York, New York; Los Angeles, California; Columbus, Ohio; and Houston, Texas. To get additional information about the use of these incentives, we conducted interviews with Section 8 property owners and managers, nonprofit organizations, industry groups, HUD contractors, and state and local government finance agencies. To determine how frequently Section 8 owners used each tool or incentive, we extracted and analyzed data from HUD’s Real Estate Management System (REMS) and spoke with HUD officials and industry groups. REMS includes historical information on all properties in HUD’s multifamily portfolio, including data on project-based Section 8 properties and contracts. One Section 8 property may have multiple contracts. To assess the views of for-profit and nonprofit property owners and managers on HUD’s Section 8 housing preservation efforts, we interviewed industry representatives and conducted case studies in five selected locations. We judgmentally selected for review five HUD office locations (two regional offices and three field offices) in which to complete interviews with for-profit and nonprofit property owners and managers. Sites were selected based on the following characteristics: (1) percentage of units that opted out from 2001 through 2005, (2) vacancy rate, (3) geographic location, (4) percentage of households with worst-case housing needs, and (5) HUD regional and field office program performance. In the selected case study locations, we conducted interviews with current and former project-based Section 8 for-profit and nonprofit property owners and managers as well as HUD office staff. We also interviewed performance-based contract administrators (PBCA), entities responsible for administering project-based Section 8 contracts, and participating administrative entities (PAE), entities responsible for structuring Mark-to-Market transactions, serving the selected case study locations. For all of our interviews, we used a standardized interview guide to ensure consistency. We gathered information on reasons selected for-profit and nonprofit property owners stayed in or left the project-based Section 8 program and perceptions about the effectiveness of HUD’s tools and incentives to preserve Section 8 housing. We also reviewed relevant documentation provided by property owners and managers, HUD regional and field office staff, PBCAs, and PAEs. We conducted our work between October 2005 and April 2007 in Baltimore, Maryland; New York, New York; Chicago, Illinois; Columbus, Ohio; Los Angeles, California; Houston, Texas; and Washington, D.C., in accordance with generally accepted government auditing standards.
In addition to the individual named above, Andy Finkel, Assistant Director; Grace Haskins, Michelle Bracy, Emily Chalmers, Mark Egger, Charlene Johnson, Alison Martin, John McGrail, Marc Molino, and Roberto Piñero made key contributions to this report.
In light of the pressing need for rental housing affordable to low-income households and concerns that the Department of Housing and Urban Development (HUD) may not be committed to maintaining its Section 8 project-based housing stock--a key source of such housing--Congress directed GAO to assess HUD's efforts to preserve its project-based housing and recommend ways to improve these efforts. This report discusses (1) patterns in the volume and characteristics of HUD's Section 8 project-based properties; (2) tools and incentives that are available to encourage property owners to stay in the program; and (3) the views of property owners, managers, and industry representatives on HUD's preservation efforts. To address these issues, GAO analyzed HUD data, reviewed pertinent legislation and regulations, and interviewed HUD officials and industry representatives. GAO identified a number of patterns in the volume, characteristics, and location of HUD's project-based Section 8 housing between 2001 and 2005. During this period owners renewed 92 percent of Section 8 rental assistance contracts and 95 percent of the units covered by these contracts. While relatively few owners left the program voluntarily, most of those we interviewed did so to seek higher rents in the private market or to convert their units into condominiums. The properties most likely to leave the program were those with few Section 8 units, family-occupied units, those in poor physical condition, and those located in markets with rapidly escalating housing values. HUD offers several incentives to keep Section 8 property owners in the program. Owners that used these incentives between 2001 and 2005 most often chose the Mark-to-Market and Mark-up-to-Market programs, both of which adjust rents to conform to prevailing market conditions. Some owners used HUD programs that offered additional financing for property rehabilitation to participants in the Section 236 mortgage reduction program and the Section 202 mortgage program for housing for the low-income elderly and persons with disabilities. HUD officials, owners, and industry representatives told us that many Section 8 owners also opted to use the Low-Income Housing Tax Credit and tax-exempt bonds, both of which the IRS administers through state housing finance agencies. Some property owners, managers, and industry representatives cited concerns with certain HUD policies and practices, especially the one-for-one replacement policy for Section 8 units and the Operating Cost Adjustment Factors (OCAF) payment process. GAO found that the one-for-one replacement policy, which prohibits reductions in the total number of Section 8 units in a property when a contract is renewed, had led some owners to leave the program. Property owners noted that they could not reconfigure their properties to supply larger units that were in higher demand, especially by elderly tenants. Although not required by statute to adopt this policy, HUD did so in order to preserve as many units as possible but is reviewing it in light of the growing concerns. Owners also expressed frustration with the long delay in OCAF adjustments, the use of statewide averages, and the inability of the process to deal with emergency situations. Finally, owners offered several suggestions that may warrant HUD's attention, including improving the Section 8 contract renewal guidance and revisiting physical inspection guidelines.
The military and political changes occurring after the Cold War era have resulted in the need for change in U.S. military forces and the acquisition system that supports them. DOD’s acquisition reform program was established to reduce acquisition costs while maintaining technological superiority. The goal is to move away from buying items made to comply with unique DOD specifications, terms, and conditions and toward buying commercial products or products made using commercial practices. The intent is to further integrate the U.S. defense and commercial industrial bases. DOD’s use of military-unique specifications and standards has been cited in several reports as a major barrier to this acquisition reform goal. In general, “military specifications” describe the physical and/or operational characteristics of a product, and “military standards” detail the processes and materials to be used to make the product. The standards can also describe how to manage the manufacturing and testing of a part. For example, a specification might describe the kind of wire to be used in an electrical circuit, and a standard might describe how the wire is to be fastened in a circuit and what tests should be conducted on the circuit. Military specifications and standards, collectively referred to as “milspecs,” are a major part of DOD’s Standardization Program, which seeks to limit variety in purchased items by stipulating certain design details. Some principal purposes for milspecs have been to (1) ensure interoperability between products, (2) provide products that can perform in extreme conditions, (3) protect against contractor fraud, and (4) promote greater opportunities for competition among contractors. Many studies over the past 20 years have attempted to redirect the milspec system. In general, these studies have recognized that although milspecs are required, DOD’s milspec process was complex and often rigid and blocked the use of commercial products and processes. These studies have repeatedly presented a number of the same issues and recommendations. Although DOD has made some progress in decreasing reliance on milspecs, in August 1993, the Deputy Under Secretary of Defense for Acquisition Reform directed that a process action team (PAT) be established to revisit milspec reform. The PAT was to develop (1) a comprehensive plan to ensure that DOD describes its needs in ways that permit maximum reliance on existing commercial items, practices, processes, and capabilities, and (2) an assessment of the impact of the recommended actions on the acquisition process. The April 1994 PAT report entitled Blueprint for Change: Report of the Process Action Team on Military Specifications and Standards is the foundation for DOD’s current milspec reform program. Appendix I lists the 24 recommendations from the report and highlights the 13 recommendations identified as principal ones. On June 23, 1994, DOD published an implementation plan for the reform program. In a June 29, 1994, memorandum, the Secretary of Defense officially accepted the PAT report and directed the services and DOD agencies to take immediate action to implement the recommendations. These three documents—the report, the plan, and the memorandum—are the basis of DOD’s current efforts to reform milspecs. DOD’s current milspec reform program is directed toward reducing government involvement in the detailed management of acquisitions so that appropriate opportunities will be taken to use commercial products and processes.
Examples of the program’s direction can be seen by such recommendations as streamlining government oversight and inspection, encouraging contractors to offer alternatives to milspecs, expecting the use of performance-based milspecs, and requiring waivers to use milspecs when no alternative is available. This program is based on essentially the same recommendations contained in earlier reports addressing milspec reform. However, the PAT report goes further than previous efforts, as it includes more details for implementation, and additional steps were taken in June 1994, when DOD issued its implementation plan. The fact that most recommendations in the current program to reform milspecs are not new is not surprising because the PAT primarily relied on prior reports for its analysis. Also, as noted in an earlier study, the milspec area has been analyzed many times and “there is literally nothing new under the sun.” In our review of eight prior milspec and acquisition reform reports issued since 1977 (listed in app. II), we identified similar recommendations for 17 of the 24 recommendations in the PAT report, including 10 of the 13 principal ones. For example, at least six of the prior reports contained recommendations similar to the PAT recommendations for training, developing nongovernment standards, and automating the development of milspecs. Of the seven new recommendations, four were milspec recommendations related to oversight, contractor test and inspection, pollution prevention, and corporate information management for acquisition. The remaining three were not recognized by DOD as milspec issues and were not addressed by the implementation plan or the Secretary’s memorandum. Not only are most of the recommendations not new, but some of the recommended tasks are already stated in DOD or service policy. For example, one major PAT recommendation is to use performance specifications; however, according to DOD and service officials, the preference for performance specifications has existed for several years. In regard to another recommendation, DOD policy already directs adoption of all nongovernment standards currently used in DOD. Furthermore, the DOD Inspector General’s Office, in comments on the PAT draft report, indicated that the services’ or defense agencies’ policies have either encouraged or required actions similar to five of the recommended tasks to eliminate excessive contract requirements. Additionally, some DOD locations had undertaken actions that are comparable to tasks recommended in the PAT report. For example, the Army’s Armament, Munitions, and Chemical Command and its Test and Evaluation Command reported that in a 10-month period, they saved $42 million in test and inspection costs, with most savings resulting from the use of process controls. Process controls were recommended in the PAT report. DOD’s current milspec program addresses many aspects of developing and applying milspecs and identifies tasks that need to be accomplished. This can be attributed, in part, to the fact that the PAT report developed more detailed plans for implementation than most of the prior reports. In addition to identifying tasks for each recommendation, the report identified risks, barriers, possible benefits and disadvantages, resources, timeframes, responsible organizations, and progress indicators associated with the recommendations. 
For example, one principal recommendation is to establish Standards Improvement Executives that have the authority and resources to implement an improvement program in each service and defense agency. For this recommendation, the report identifies six tasks for implementation, such as appointing the Executives by a specified date and developing a separate budget line item for the funding they control; a risk to successful implementation, the concern that adequate resources might be unavailable; a barrier, the failure of past DOD leadership to demonstrate long-term commitment to the milspecs improvement program; benefits, such as helping foster cultural change, and disadvantages, such as creating another DOD power base; estimated costs of about $269 million for the entire milspecs improvement program over 6 fiscal years starting in 1994; and time frames for the tasks. In June 1994, a Report Implementation Group—consisting of representatives from OSD, the services, and the Defense Logistics Agency—met and developed DOD’s implementation plan. The plan addresses an approach for ensuring that the infrastructure and resources required for reform are in place. A key feature of the plan is that each major buying command and center is required to provide a draft of its own implementation plan to its service/agency by the end of October 1994, with final submittal by the end of November 1994. Additionally, to help ensure stable milspecs improvement funding and provide management oversight, the plan envisions that the Assistant Secretary of Defense (Economic Security) work with the DOD Comptroller to create a common program element for each service’s budget. Some PAT report recommendations were not viewed as directly related to milspec reform and were not addressed by the implementation group. Also, the group did not address some other implementing tasks. For example, the task to establish memorandums of understanding with industry was set aside because the PAT had provided no data on the benefits of this task and the implementation group questioned the value. In another case, a recommended task—canceling or inactivating standards identified by industry as problems—was temporarily suspended by the group pending the completion of an additional analysis. According to OSD officials, the implementation plan is simply the first step in a long-range, iterative process. We were told that the implementation plan reflects current thinking and that the plan is to be updated periodically to reflect progress, issues, and new directions. Officials said that in 6 months the group will revisit the plan and update it. The major focus of the current milspec reform program is on changing DOD’s acquisition culture. Specifically, the PAT’s recommendations and implementing tasks, the subsequent implementation plan, and the Secretary of Defense’s memorandum all address the need to change DOD’s acquisition culture. We previously reported that the inability to change the culture has thwarted reform. The PAT report goes beyond identifying the need for cultural changes and addresses several elements in a cultural change program, including (1) leadership, (2) training, (3) resources, and (4) incentives for desired behavior. In a February 1992 report, we stated that such elements, especially top management commitment and training, have been successfully used in the private sector to change organizational culture. 
However, we also noted that experts believe that a culture change is a long-term effort that takes at least 5 to 10 years to complete. DOD officials and prior studies have stated that past milspec reform initiatives were not fully successful because top management did not participate personally in the process and provide the required leadership. For example, in an overview of prior milspec initiatives, a 1993 report stated that personal involvement of DOD management has worked, and hands-off, directive-type management has not. The Secretary of Defense, in signing the memorandum to implement the reform program, stated that the current senior leadership is committed to ensuring that acquisition reform changes will be accepted and institutionalized. DOD officials said that this is the first time that such support has existed prior to beginning a milspec reform effort. The PAT report and the Secretary’s memorandum stipulate that OSD management and other acquisition leaders must take an ongoing and proactive role in reinforcing the acquisition reform message, of which milspec reform is only one component. According to the PAT report, senior DOD management has a major role in establishing the environment essential for cultural change by, among other things, participating in the implementation process. Leadership is also required to ensure that top-level officials designated to carry out the reforms have the authority and resources to implement the program. For example, some of the prior reports have noted that the problem is not in assigning reform responsibilities to designated officials, but in ensuring that these officials have the required authority and resources. The most likely candidate to carry forward a reform agenda—the Standards Improvement Executive—has often been removed from the acquisition decision-making process. As described earlier, the PAT recommends giving these Executives the authority needed to effect desired reform. As required in the implementation plan and the Secretary’s memorandum, Standards Improvement Executives were appointed in July 1994 to participate in the Defense Standards Improvement Council. The Council is to oversee the implementation of, provide direction to, and resolve issues in the milspec reform program. Among other things, the Secretary’s memorandum required the Council to report directly to the Assistant Secretary for Economic Security and directed that actions be taken to budget funds for the program. However, whether such changes will give these officials the authority and resources needed for milspec reform is yet to be determined. The PAT report cites training as “the linchpin of cultural change, providing new skills and knowledge to implement a new acquisition paradigm.” The majority of the report’s recommendations either included tasks to provide training or cited the need for training to overcome cultural barriers. While training has been recommended in most prior milspec reform reports, the current emphasis on training appears more extensive and is intended to include more personnel in training programs. Past training recommendations primarily addressed classroom training. The current recommendations require continuous, rather than one-time, training for all levels and include many delivery systems in addition to classroom training to reach the personnel responsible for implementation.
Examples include such media as video tapes of speeches and interviews by top OSD and service leaders, video conferences, correspondence courses, computer-based instruction, and road shows (in which senior acquisition personnel go on-site to the workforce to sell the need for changes and answer questions). While some of the training is focused on demonstrating the need for change, other training is to provide instruction on specific skills and capabilities such as developing and applying performance specifications, conducting market research, or obtaining quality assurance with reduced government oversight. The PAT report estimated training costs at about $13 million over 6 years, starting in 1994. This was to be in addition to training already funded within existing budgets for the Defense Acquisition University. We were told that (1) the amounts in the report are estimates and are not based on detailed analysis and (2) the services are developing details for budget submissions. The implementation plan does not add substantive details on training to the PAT report. However, one possibly significant difference between the two is that the implementation plan does not require that training related to milspec reform be a mandatory part of career progression for all appropriate acquisition personnel as the PAT recommended. This could serve to decrease some of the importance of new training. The PAT and prior efforts have stated that personnel and funding are crucial resources to the success of the recommended actions. The PAT reports that one way of ensuring reform is to develop a joint milspec budget with individual service/agency line items to control funds needed for implementing initiatives. Four of the eight prior reports we analyzed also recognized the need for separate funding to accomplish milspec recommendations. Currently, the funding and personnel responsible for developing and maintaining milspecs used by DOD are decentralized with OSD providing overall policy and guidance. As a result, local commanders where standardization activities are located control the resources and can reduce standardization efforts to free funds and personnel for other tasks considered more important. In our field visits we noted examples of reductions in resources for milspec functions because of other work priorities. We were told that the personnel situation could intensify as the DOD acquisition workforce continues to shrink. Reportedly, the workforce has been reduced by 23 percent, or 134,000 jobs, since 1988. The PAT report estimated that total additional funding required to implement the recommendations would be as shown in table 1. PAT officials told us that the implementation estimates were very rough, and they could not provide support for them. The Secretary of Defense’s implementing memorandum does not address the amount of funds that might be required. It requires the Under Secretary of Defense (Acquisition and Technology) to arrange for funds needed in fiscal years 1994 and 1995 to efficiently implement the PAT report and directs the services to program funding for fiscal year 1996 and beyond. DOD and service officials told us that providing funds to carry out recommendations or ensuring that funds will be available for milspec functions will be difficult. As noted in earlier reports, lack of adequate funding was a problem in other milspec reform efforts. 
Furthermore, because of reductions in the DOD acquisition workforce, personnel authorizations could become as critical, if not more critical to milspec reform as funding. For example, the implementation plan pointed out that the Air Force, even with adequate funds, might have difficulty implementing the recommendations due to personnel ceilings. All DOD organizations might experience such difficulty because DOD is implementing the Federal Workforce Restructuring Act of 1994 by establishing work year ceilings on civilian personnel levels. One way the program recommends achieving cultural change is to provide incentives for industry and program officials to effectively introduce alternatives in the proposal process as revisions or substitutes for milspecs. Our previously discussed December 1992 report noted that one reason reforms do not occur is that the basic incentives or pressures that drive the participants’ behavior in the process are not changed. Accordingly, changing incentives and pressures is important for cultural change as opposed to coercive and procedural solutions that attempt to make things happen without necessarily affecting why they did not happen in the first place. The PAT recommends that all new high-dollar value solicitations and ongoing contracts include a statement encouraging contractors to submit alternative solutions to milspecs. Tasks proposed to implement the recommendation include policy changes to allow contractors offering alternatives to milspecs the possibility of additional profit or fees for new contracts and the negotiation of a no-cost settlement for certain existing contracts. A similar recommendation was in the 1977 Defense Science Board report; however, a 1993 Defense Science Board report pointed out that currently “Government profit ‘guidelines’ do not encourage contractors to reduce costs since profit is a percentage of cost.” Also, some DOD officials have questioned whether this recommendation provides more incentives than the current program. Accordingly, questions remain as to whether this recommended action will adequately incentivize contractors. In addition to providing incentives to contractors, DOD’s program envisions providing incentives to program managers. One of the recommended tasks is to issue a change in policy that encourages program managers to select alternative solutions to milspecs by allowing the program to retain a portion of any resulting savings. This was recommended in a 1987 study, but was not implemented. Our review identified program areas that have not been fully developed in this early stage of implementation. Specifically, we observed that (1) data on the benefits of implementing the recommended actions were generally not available, (2) opportunities for advancing acquisition reform goals had not been prioritized, and (3) indicators were not adequate to measure progress toward intended goals. DOD officials acknowledged the need for further work in these areas as implementation proceeds. The PAT report and other reports assert that milspec reform will result in dollar savings and other benefits that will more than offset the additional funds required for reforms. However, neither the PAT report, the implementation plan, nor the Secretary’s memorandum provide much supporting data on dollar savings or other benefits to be achieved. The PAT’s charter specifically required the team to quantify the benefits of recommendations. 
Although 14 of the 24 recommendations refer to expected savings or cost avoidances, the report provided specific dollar benefits for only 2, and these were the savings realized from limited implementation by a service or defense agency. OSD and service officials acknowledged that the PAT did not do much to quantify benefits. These officials stated that it was difficult to identify costs and savings of the various actions involved in each recommendation but conceded that this information should be developed. DOD officials said that because many interrelated actions are being implemented in addition to milspec recommendations, it is not possible to identify the results of specific changes. The July 1993 Defense Science Board report on Defense Acquisition Reform also supports this view. It cites case studies to show potential savings by eliminating five elements that impose inefficiencies in the current acquisition systems—unique government specifications, processes, and practices being one element. However, the examples indicated that the five elements combined to cause the additional costs of government items, and the savings from each recommended change were not subject to precise calculation. During his press conference on milspec reform, the Secretary of Defense stated that milspec reform was expected to increase DOD’s costs in the first year but to produce billions of dollars in savings thereafter. He cited the electronics area as having the potential to produce savings of about $700 million. DOD officials have not identified any reliable data on costs and savings that support these statements. Identifying monetary savings could be critical to achieving acceptance of the reform program by officials throughout the acquisition community. Prior efforts, such as the Defense Management Review Working Group Initiative, reportedly failed because, among other things, the services and defense agencies never concurred with the initiative. A DOD official, in commenting on the draft PAT report, said that it would be helpful if the report included some form of cost benefit analysis. More details of monetary benefits might be required if milspec reform is to be successful because officials could be reluctant to commit scarce resources if they are not convinced that the effort will produce identifiable benefits. Under its current milspec reform program, DOD has not prioritized actions by identifying where the greatest needs and opportunities for milspec reform exist. Neither has it clearly differentiated the types of acquisitions, classes of equipment, or sectors of the industrial base to which each recommendation has the greatest applicability. The PAT charter tasked the team to evaluate the impact of implementing its recommendations on major systems, less-than-major systems, systems support equipment, spare and repair parts, base support equipment, and supplies and consumables. Although the team addressed some of these areas, an overall evaluation of the impact of the PAT recommendations on different types of acquisitions, buys, or industrial sectors was not done. A more detailed evaluation would have been instrumental in identifying where the greatest needs and opportunities for milspec reform exist. Comments received on a draft of the PAT report indicate concern about the global nature of some of the recommendations. For example, one official noted that the report proposes to apply a “grab bag” of practices to each and every program without considering the specific needs of each program. 
The official said that this approach would harm the general acquisition process. The PAT response did not directly address these concerns but stated that, among other things, the PAT recognized that the defense acquisition process was very complex and that simple solutions broadly applied are not the answer. If all needed resources do not become available, DOD might need to focus on the areas with the highest payoff. The limited examples of identified benefits appear to indicate that the recommendations could meet varying levels of needs or provide different benefits, depending on the industrial sector involved. For example, Defense Science Board reports issued in January 1987 and July 1993 identify key industrial sectors, such as electronics, jet engines, semiconductors, and transportation, as offering opportunities for DOD to buy commercial products without using milspecs. The Secretary of Defense stated that in the electronics area, industry was so far ahead of DOD technologically that using performance or commercial specifications for these items would produce great benefits. DOD's implementation plan does not target this or other areas for priority attention. Identifying the areas of greatest opportunity and establishing details on how DOD could apply the recommendations to different types of buys could be important in ensuring that implementing officials clearly understand what is required and what benefits are expected. DOD officials told us that they are developing tools for the services to use in identifying the greatest opportunities. They said that these tools include a questionnaire to help users prioritize the areas of greatest opportunity for milspec reform within the various DOD activities. Also, they said that DOD is establishing priorities for eliminating management and process standards that have been tentatively identified by industries as significant integration barriers or cost drivers.

DOD's milspec program calls for establishing indicators that monitor the program's success in translating the reform policy into actions, reducing costs, and integrating the defense and commercial bases. DOD's implementation plan identifies 12 indicators, a reduction from the approximately 50 individual ones listed in the PAT report. In addition, the plan states that an existing database will be expanded to have automated data reporting for some indicators. However, DOD officials said that the expanded data reporting was not viewed as cost-effective, and they currently plan to expand the data in only one limited area. An earlier milspec reform report noted that the current DOD computer systems are not able to track some critical data elements, such as the volume of commercial items being bought or the number of items bought to milspecs as opposed to some other type of specification. The majority of the indicators in the PAT report and the implementation plan consist of determining whether an event has occurred or counting the number or percent of selected documents, such as milspecs or commercial standards, that are used. For example, on the recommendation regarding leadership, the PAT report indicators include ascertaining if (1) the policy memorandum is issued, (2) video conferences occur, and (3) progress reports are submitted. Indicators for other recommendations include the number of (1) milspecs and commercial-type documents used, (2) commercial acquisitions, and (3) alternatives to milspecs proposed and accepted.
These do not appear to measure whether DOD is progressing toward its overall goals of reducing acquisition cost and time and integrating the industrial bases. OSD officials recognize that the indicators are weak and are currently working on developing better ones. Although the PAT report recommended that the Defense Standards Improvement Council monitor progress, no other organization has yet been assigned specific responsibility for developing improved indicators.

We reviewed the April 1994 PAT report; DOD's June 23, 1994, implementation plan; and the June 29, 1994, Secretary of Defense memorandum directing implementation of the PAT report. We analyzed the 24 recommendations in the PAT report, focusing on the 13 principal ones. To see whether these recommendations were cited in past studies, and whether resources, time frames, and progress indicators were more fully addressed in the current program, we compared the program with selected prior reports on milspecs and acquisition reform. To obtain more data about milspec issues and changes that could occur under the reform program, we (1) visited standardization activities and program offices at the Air Force Materiel Command and Aeronautical Systems Center, the Army Materiel Command and Aviation and Troop Command, and the Defense General Supply Center and (2) interviewed officials from the services, standards-writing organizations, and industries. We conducted our work between November 1993 and August 1994 in accordance with generally accepted government auditing standards. We did not obtain written DOD comments on a draft of this report; however, we discussed our results with agency officials. In general, they concurred with our results and made some suggestions that have been considered in preparing this report.

We are sending copies of this report to the Secretary of Defense, the Deputy Under Secretary of Defense for Acquisition Reform, and interested congressional committees. Please contact me at (202) 512-4587 if you have any questions concerning this report. Major contributors to this report are listed in appendix III.

The following are the recommendations in the report entitled Blueprint for Change: Report of the Process Action Team on Military Specifications and Standards, dated April 1994. We identify the 13 principal recommendations with an asterisk (*).

1.* All ACAT Programs for new systems, major modifications, technology generation changes, nondevelopmental items, and commercial items shall state needs in terms of performance specifications.

2.* Direct that manufacturing and management standards be canceled or converted to performance or nongovernment standards.

3.* Direct that all new high value solicitations and ongoing contracts will have a statement encouraging contractors to submit alternative solutions to military specifications and standards.

4.* Prohibit the use of military specifications and standards for all ACAT programs except when authorized by the Service Acquisition Executives or designees.

5. Change current processes and procedures to ensure that specifications and standards only list references essential to establishing technical requirements.

6. Eliminate the current process of contractually imposing hidden requirements through references listed in equipment/product specifications or noted on engineering drawings.

7. Mandate cancellation or inactivation of new design obsolete specifications and standards that have had no procurement history for the past 5 years.
Cancel all unnecessary data item descriptions.

8.* Form partnerships with industry associations to develop nongovernment standards for the replacement of military standards where practical.

9. Establish a process to include industry and government users upfront in the specifications and standards development and validation processes.

10. Assign specifications and standards preparation responsibility to the Defense Logistics Agency for Federal Supply Classes that are primarily commercial.

11.* Direct government oversight be reduced by substituting process control and nongovernment standards in place of development/production testing and inspection and military unique quality assurance systems.

12.* Direct a goal of reducing the cost of contractor-conducted development and production test and inspection by using simulation, environmental testing, dual-use test facilities, process controls, metrics, and continuous process improvement.

13.* Assign Corporate Information Management offices for specifications and standards preparation and use.

14.* Direct use of automation to improve the processes associated with the development and application of specifications and standards and Data Item Descriptions.

15.* Direct the application of automated aids in acquisition.

16. Use Distributed Interactive Simulations, Design to Cost and Cooperative Research and Development Agreements to achieve aggressive cost/performance trade-offs and dual-use capabilities.

17. Direct the establishment and execution of an aggressive program to eliminate or reduce and identify the quantities of toxic pollutants procured or generated through the use of specifications and standards.

18.* Direct revision of the training and education programs to incorporate specifications and standards reform. Contractor participation in this training effort shall be invited and encouraged.

19.* Senior DOD management take a major role in establishing the environment essential for acquisition reform cultural change.

20.* Formalize the responsibility and authority of the Standards Improvement Executives, provide the authority and resources necessary to implement the standards improvement program within their service/agency, and assign a senior official with specifications and standards oversight and policy authority.

21. Use innovative approaches in the acquisition of weapon systems, components, and replenishment items by using commercial practices.

22. Increase the use of "partnering" in contracts and program management to improve relationships and communication between government and industry.

23. Continue to encourage and assist contractors to use activity-based costing in circumstances where the method could improve cost allocations, bidding, and cost reimbursements.

24. Integrated Product Development will be the preferred risk mitigation tool for all developmental acquisitions.

Road Map for Milspec Reform: Integrating Commercial and Military Manufacturing, Report of the Working Group on Military Specifications and Standards, Center for Strategic and International Studies, 1993.

Acquisition Streamlining: Specifications and Standards, DOD Inspector General, Report 92-INS-12, September 21, 1992.

Report of the Process Action Team on Procedures for Working Group 9 on Specifications and Standards Under the Regulatory Relief Task Force of the Defense Management Review, August 1990.
Report of the Process Action Team on User Feedback for Working Group 9 on Specifications and Standards Under the Regulatory Relief Task Force of the Defense Management Review, October 1990.

Enhancing Defense Standardization-Specifications and Standards: Cornerstones of Quality, Report to the Secretary of Defense by the Under Secretary of Defense (Acquisition) (the Costello report), November 1988.

Use of Commercial Components in Military Equipment: Final Report of the Defense Science Board, 1986 Summer Study, January 1987.

A Quest for Excellence: Report to the President by the President's Blue Ribbon Commission on Defense Management (the Packard Commission report), June 1986.

Report of the Task Force on Specifications and Standards, Defense Science Board (the Shea Report), April 1977.

Lillian I. Slodkowski
GAO reviewed the Department of Defense's (DOD) efforts to implement acquisition reforms, focusing on whether its current program: (1) advances military specifications and standards reform; and (2) gives adequate attention to key issues and concerns. GAO noted that: (1) DOD's current milspec reform program builds on previous studies; (2) although many of the recommendations are essentially the same as those in earlier reports, the current program goes further than previous efforts because it includes more details for implementation; (3) while the implementation strategy is still being refined, officials in the Office of the Secretary of Defense stated that the June 1994 implementation plan is the first step in a long-range, iterative process; (4) major buying commands and centers are to present plans by November 1994 that should provide further implementation details; (5) the current milspec reform effort focuses on changing the acquisition culture and contains several actions intended to accomplish this change, including: (a) ensuring long-term, top-management support; (b) providing training to the affected workforce; (c) securing adequate funding and personnel resources; and (d) establishing incentives for desired behavior; (6) these actions have been used successfully by some commercial companies to promote cultural change; (7) to achieve the major cultural change desired, DOD will need acceptance and support of the milspec reform program throughout the military acquisition community, including both DOD's and contractors' offices; (8) achieving this acceptance and support could become more difficult without: (a) improved data on the benefits of implementing the recommended actions; (b) better focus on areas with the greatest opportunities for benefits; and (c) adequate indicators, referred to by DOD as metrics, to measure progress toward intended goals; and (9) DOD officials have acknowledged difficulties in these areas and indicated that actions would be taken to address these shortcomings as program implementation continues.
Among other things, the Clean Water Act regulates point source discharges, which are those that emanate from discrete conveyances such as pipes or man-made ditches, and, under section 404, it prohibits the discharge of dredged or fill material into "waters of the United States" without a permit from the Corps. EPA has primary responsibility for carrying out the act, including final administrative responsibility for interpreting "waters of the United States," a term that governs the scope of all other programs under the Clean Water Act. EPA and Corps regulations define "waters of the United States" for which a section 404 permit must be obtained to include, among other things, (1) interstate waters; (2) waters that are or could be used in interstate commerce; (3) waters, such as wetlands, whose use or degradation could affect interstate commerce; (4) tributaries of these waters; and (5) wetlands adjacent to these waters, other than waters that are themselves wetlands. In addition to the Clean Water Act, some state and local governments have developed programs to protect waters, including wetlands, either under state statutes or local ordinances or by assuming section 404 permitting responsibilities.

EPA established, in consultation with the Corps, the substantive environmental protection standards that project proponents must meet to obtain a permit for discharging dredged or fill material into "waters of the United States," while the Corps administers the permitting responsibilities of the program. The day-to-day responsibilities for implementing the section 404 program have been delegated to 38 Corps district offices, with the Corps' divisions and headquarters providing oversight of the program. In fiscal year 2005, the Corps' regulatory program budget was $144 million—a 2 percent increase over its fiscal year 2004 funding level. The districts processed about 86,000 permits in fiscal year 2003. Figure 1 shows the locations of 5 of the 8 Corps divisions and 38 districts that we contacted as part of our review. These include the Chicago, Galveston, Jacksonville, Omaha, and St. Paul districts.

The first step in the regulatory process is to determine whether there is any water or wetland on the project site and, if so, whether the water or wetland is a "water of the United States." The Corps determines if the water or wetland is a "water of the United States" and, thus, whether it has jurisdiction, by documenting any connections of the water or wetland on the site to any downstream navigable water or interstate commerce, or by determining if the wetland is adjacent to these waters. If the Corps determines that a water or wetland is jurisdictional but a project proponent disagrees, the proponent can file an administrative appeal challenging the Corps' determination. Appeals review officers, located at Corps divisions, are responsible for reviewing the administrative records for approved jurisdictional determinations and determining if the appeals have merit. Project proponents may also subsequently file legal actions in federal court if they disagree with the Corps' final decision on an appeal. Figure 2 shows the Corps' decision-making process for a jurisdictional determination. If the waters or wetlands are found to be jurisdictional, project proponents who want to discharge dredged or fill material into waters or wetlands as part of development activities on the property may be required to submit an application to obtain a 404 permit.
In evaluating permit applications, the Corps requires the project proponent to take actions to avoid, minimize, and compensate for the potential impact of destroying or degrading "waters of the United States." Under guidelines issued by EPA, the Corps may not authorize a discharge of dredged or fill material if there is a practicable alternative that would have less significant adverse environmental consequences. According to the Corps, under this regulation, it can only authorize the least environmentally damaging, practicable alternative.

In the preamble to its 1986 regulations, the Corps interpreted its section 404 jurisdiction to extend to intrastate waters that are or would be used as habitat by birds protected by migratory bird treaties or by other migratory birds that cross state lines. The preamble also addressed (1) waters that are or would be used as habitat for endangered species and (2) waters used to irrigate crops sold in interstate commerce. EPA made a similar interpretation in preamble language in 1988. 53 Fed. Reg. 20765 (June 6, 1988). In SWANCC (Solid Waste Agency of Northern Cook County v. U.S. Army Corps of Engineers), decided in 2001, the Supreme Court held that the use of isolated, intrastate, nonnavigable waters as habitat by migratory birds could not, by itself, support federal jurisdiction over those waters, a ruling that potentially narrowed the scope of the Corps' jurisdiction. According to the Chief of the Regulatory Branch, certain categories of waters or wetlands may be more at risk for a determination of no jurisdiction as a result of SWANCC. These potentially geographically isolated waters include prairie potholes, playa lakes, and vernal pools. (See fig. 3.)

The extent to which the reasoning in SWANCC applies to waters other than those specifically at issue in that case has been the subject of considerable debate in the courts and among the public. Some groups have argued that SWANCC precludes the Corps from regulating virtually all isolated, intrastate, nonnavigable waters, as well as nonnavigable tributaries to navigable waters, while others have argued that it merely prohibits the regulation of isolated, intrastate, nonnavigable waters and wetlands solely on the basis of their use as habitat by migratory birds.

In January 2003, the Corps and EPA issued a joint memorandum to clarify the impacts of the SWANCC ruling on federal jurisdiction over waters and wetlands. The guidance called for Corps and EPA field staff to continue to assert jurisdiction over traditional navigable waters, their tributaries, and adjacent wetlands. It also directed field staff to make jurisdictional determinations on a case-by-case basis, considering the guidance in the memorandum, applicable regulations, and any relevant court decisions. The memorandum further noted that, in light of SWANCC, it is uncertain whether there remains any basis for jurisdiction over any isolated, intrastate, nonnavigable waters. While the SWANCC ruling specifically addressed the use of migratory birds as a basis for asserting jurisdiction over these waters, it did not address other bases cited in Corps regulations as examples for asserting jurisdiction. These bases include intrastate waters whose use, degradation, or destruction could affect interstate commerce, including waters (1) that are or could be used by interstate or foreign travelers for recreational or other purposes, (2) from which fish or shellfish are or could be taken and sold in interstate or foreign commerce, or (3) that are used or could be used for industrial purposes by industries in interstate commerce. Because of this uncertainty, the memorandum instructed the field staff to seek formal project-specific headquarters approval prior to asserting jurisdiction over such waters based solely on links to interstate commerce. While EPA and Corps regulations provide a framework for determining which waters are within federal jurisdiction, they leave room for judgment and interpretation by the Corps districts when considering jurisdiction over, for example, adjacent wetlands, tributaries, and ditches and other man-made conveyances.
Before SWANCC, the Corps generally did not have to be concerned with such factors as adjacency, tributaries, and other aspects of connection with an interstate or navigable water body if the wetland or water body qualified as a jurisdictional water on the basis of its use as habitat by migratory birds. In our February 2004 report, we found that Corps districts and staff interpreted and applied federal regulations differently when determining what wetlands and other waters fall within federal jurisdiction. For example, districts differ in their use of proximity as a factor in making determinations. One district required that the isolated water be within 200 feet of other "waters of the United States"; another required a distance of 500 feet; and still others had no minimum requirement. We concluded that it was unclear whether or to what degree these variations would result in different jurisdictional determinations in similar situations, in part, because Corps staff consider many factors when making these determinations. In addition, few Corps districts make public the documentation that specifies the interpretation and application of the regulations they used to determine whether a water or wetland is jurisdictional. Consequently, project proponents may not clearly understand their responsibilities under section 404 of the Clean Water Act. We recommended, among other things, that the Corps survey district offices to determine how they are interpreting and applying the regulations and evaluate whether differences need to be resolved. In response, the Corps conducted a preliminary survey in 2004 and a more detailed survey in 2005. As of July 2005, the Corps was in the process of evaluating the districts' responses to the 2005 survey.

Each of the five Corps districts we visited generally used a similar process and similar data sources for making jurisdictional determinations. The districts use a four-step process that consists of (1) receiving a request for a jurisdictional determination or a permit application; (2) reviewing the submitted information for completeness; (3) requesting additional data from the project proponent, as necessary; and (4) analyzing the data to determine if the waters or wetlands are regulated under the Clean Water Act. Corps districts also used similar data to make these determinations, which frequently included topographic, soil, and wetland inventory maps as well as photographs. These data show, among other things, where the proposed project is located and whether there appears to be a basis for regulating a water, such as whether the site's elevations would allow water on the site to flow into "waters of the United States." The Corps generally conducts site visits when these data do not sufficiently demonstrate the nature and extent of any connection between an on-site water and a "water of the United States." According to Corps project managers, a number of factors influence the types and amounts of data they review, such as the size and value of resources at risk and their confidence in the capability and integrity of any consultants the project proponents have hired to prepare their permit applications.

In making jurisdictional determinations, project managers in each of the five districts we visited proceed through the following four steps: Receiving a request for a jurisdictional determination or a permit application. The request is submitted by a project proponent, who may be a property owner or the owner's authorized agent, such as a consultant, or a developer.
At a minimum, the request must clearly identify the property and the boundaries of the project site—either with a site location map or with another map that defines the project boundaries—as well as the name of the project proponent, a person to contact, and permission to go onto the project site in the event that a site visit is to be conducted. Reviewing the submitted information for completeness. The project manager assigned to the project reviews the information to ensure that the request is signed by the project proponent and that it contains the minimum required information. The project manager also reviews the information to ensure that it is sufficient to locate the property. The amount and type of information the Corps requests that the project proponent submit may vary by type of applicant and project as well as the extent and functional values of the water resources that may be impacted. For example, residential homeowners who are requesting a determination for their home sites are generally not expected to submit more than the minimum amount of information. In contrast, the districts may request much more detailed information from consultants who are preparing jurisdictional requests or permit applications for commercial property owners. For example, the Jacksonville District recommends that requests be accompanied by aerial photographs; a legible survey, plat drawing, or other parcel plan showing the dimensions of the property; and a list of other maps that provide additional information about the project site such as the types of soils at the site. Requesting additional data from the project proponent, as necessary. If project managers find that information submitted does not sufficiently identify the property or the nature of the project, they will informally or formally request additional information. The Corps will not proceed with a jurisdictional determination until it has received all requested information. Analyzing the data to determine if the Corps has jurisdiction. Once the requested information has been received, the project manager will analyze the data to determine if the waters or wetlands on the project site are connected to any downstream navigable waters that could be or are used for interstate commerce, or adjacent to such waters. If the Corps has jurisdiction, it defines the limits of federal jurisdiction by, for example, identifying high tide lines or ordinary high water marks. If the waters include wetlands, the project manager must also identify the boundaries of the wetlands—that is, conduct what is known as a wetland delineation. Project managers in the five districts we visited generally use similar data sources to make their jurisdictional determinations. The most commonly used data include the following: Topographic maps. Topographic maps show the shape of the Earth’s surface through contour lines, which are imaginary lines that join points of equal elevation on land. Such contours make it possible to measure the height of hills and mountains and the depth of swales and valleys. Widely spaced contours or an absence of contours means that the ground slope is relatively level. Contours that are very close together represent steep slopes. It is often possible to use contours to determine the direction of water flow, and potential connections to other waters. Topographic maps also show symbols representing features such as roads, railroads, streets, buildings, lakes, streams, irrigation ditches, and vegetation. 
In the five districts we reviewed, 590 of the 770 jurisdictional determination request or permit application files where the Corps’ project managers determined there was no federal jurisdiction included a topographic map. This ranged from a low of 64 percent of the Jacksonville District’s files (89 of 140 files) to a high of 89 percent of both the Galveston District’s (58 of 65) and the St. Paul District’s (140 of 158) files. (App. II contains district-specific information on, among other things, the number of files that contained different types of data.) Figure 4 shows topographic maps used to identify a project location as well as the detailed surface contours of the project site. Soil survey maps. A soil survey map shows the types or properties of soil on a project site. There are over 20,000 different kinds of soil in the United States and they differ depending on how, where, and when they were formed. Soil is altered by the interactions of climate, surface contours, and living organisms over time and has many properties that fluctuate with the seasons. For example, it may be alternately cold and warm or dry and moist. Similarly, the amount of organic matter will fluctuate over time. Such maps can help indicate whether waters or wetlands on a project site have any hydrologic relationship or connection. In the five districts we reviewed, 404 of the 770 files included a soil survey map. This ranged from a low of 17 percent of the Omaha District’s files (43 of 257) to a high of 82 percent of the Chicago District’s files (123 of 150). Figure 5 shows a soil survey map superimposed onto an aerial photograph. The project location is the same as in figure 4. National Wetlands Inventory maps. A wetlands inventory map indicates the potential and approximate location of waters or wetlands as well as wetland types. Most of these maps were produced using aerial photography from the 1980s. The maps also classify the wetlands by type, such as a forested wetland or a scrub and shrub wetland. In the five districts we reviewed, 401 of the 770 files included a wetlands inventory map. This ranged from a low of 11 percent of the Jacksonville District’s files (15 of 140) to a high of 90 percent of the Chicago District’s files (135 of 150). Figure 6 shows a wetlands inventory map superimposed onto an aerial photograph. Photographs. The Corps can use aerial and ground photographs to determine if waters or wetlands are located on a project site and to identify other structures on the site that may provide pathways for water to travel from one water body to another. Such photographs are available from a number of sources, including the project proponents. In addition, aerial photographs are available from the Department of Agriculture’s Natural Resources Conservation Service showing wetlands on private farms that, in return for federal subsidies, have been preserved instead of being turned into cropland. In the five districts we reviewed, 562 of the 770 files included aerial photographs. This ranged from a low of 44 percent of the Omaha District’s files (112 of 257) to a high of 91 percent of both the Chicago District’s (137 of 150) and the Galveston District’s (59 of 65) files. Similarly, 320 of the 770 files included ground photographs. This ranged from a low of 26 percent of the Jacksonville District’s files (36 of 140) to a high of 63 percent of the Chicago District’s files (95 of 150). 
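As a quick, illustrative cross-check of the rounded percentages cited above, the short Python sketch below recomputes a few of them from the underlying counts. The sketch and its labels are ours, not a Corps or GAO tool; the counts are those reported in this section.

```python
# Illustrative check (our own sketch, not a Corps or GAO tool) of the rounded
# percentages cited above for how often selected data sources appeared in the files.
def share(count: int, total: int) -> int:
    """Percentage of a district's nonjurisdictional files containing a data source, rounded."""
    return round(100 * count / total)

# (files containing the data source, total nonjurisdictional files reviewed in the district)
examples = {
    "topographic map, Jacksonville": (89, 140),     # cited as 64 percent
    "soil survey map, Omaha": (43, 257),            # cited as 17 percent
    "wetlands inventory map, Chicago": (135, 150),  # cited as 90 percent
    "aerial photographs, Galveston": (59, 65),      # cited as 91 percent
}
for label, (count, total) in examples.items():
    print(f"{label}: {share(count, total)} percent ({count} of {total} files)")
```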
The Corps uses these maps and photographs not only to obtain unique information about a site but also to corroborate information from other sources about that site. For example, the Corps can compare National Wetlands Inventory maps with topographic maps to help confirm whether there are waters or wetlands on a project site. The National Wetlands Inventory map could also alert the Corps to the types of waters or wetlands on the site. If the land has been used for growing crops, the Corps can obtain Natural Resources Conservation Service aerial photographs to determine if that agency has verified the existence of wetlands on that particular site. This information can then be used in examining aerial or site photographs provided by the project proponent.

Currently, project managers can use online resources for much of the data they need to make jurisdictional determinations. For example, many topographic maps and aerial photographs are available through online sources. In addition, project managers in all of the districts we visited can retrieve more sophisticated versions of aerial photographs, such as color-infrared photographs and digital orthophoto quadrangles, which are computer-generated images of aerial photographs that have been enhanced to provide a clearer view of the ground. Similarly, project managers in all five districts have the ability to superimpose different maps, such as soil survey maps, onto aerial photographs. In some cases, they can produce one map that shows the topography, wetlands, and soils present on a property. According to several project managers we contacted, this ability provides them with a more comprehensive view of the status of waters or wetlands at individual project sites.

As can be seen in the following examples, some districts may also use other data sources that are specific to their district in making jurisdictional determinations.

The Galveston District relies on maps that designate flood-prone areas—areas that are likely to be flooded. These maps, produced by the Federal Emergency Management Agency, are used for insurance purposes. According to the Galveston District's policy, if a water or wetland is in an area designated by the agency as a flood zone, the water or wetland will generally be considered adjacent and fall within the Corps' jurisdiction.

The St. Paul District relies on the Southeastern Wisconsin Regional Planning Commission as a resource for maps for seven counties, which include the city of Milwaukee. The commission prepares maps for a variety of purposes, such as transportation planning. The maps include topographic maps as well as existing land-use maps, some of which identify waters and wetlands. Its digital land-use inventory is updated every 5 years. In addition, the state of Wisconsin compiles its own wetland inventory maps and, as a result, Corps project managers may rely less on National Wetlands Inventory maps when determining jurisdiction. Similarly, the state of Minnesota has developed public waters inventory maps that Corps project managers can access.

In the Chicago District, which encompasses six counties, project managers can rely on more detailed wetland identification maps that some of the counties have prepared with funding received from EPA as part of its Advance Identification of Disposal Areas program.

According to project managers, the number of data sources and the specific data they use to make a jurisdictional determination can vary, depending on the nature of the data and the project site.
For example, according to one project manager, if the project site is a 5-acre flat piece of property that contains a one-quarter-acre wetland, and the nearest tributary to a "water of the United States" is 5 miles away, the project manager would not necessarily decide to visit the site to make a determination that the wetland was not jurisdictional. In contrast, according to this project manager, a 1,000-acre site that has 25 different waters and wetlands totaling 200 acres and a series of ditches, and is near a tributary to a "water of the United States," could warrant several site visits.

The use of a consultant to prepare a jurisdictional determination request or a permit application can also affect the Corps' decision on what data to review. Each district maintains a list of consultants whom residential homeowners and developers can use, although the Corps does not advocate or recommend specific consultants or require that only those consultants on its lists be used. As a result, the list can contain a number of consultants with varying levels of technical expertise. According to several project managers, if they have extensive experience with a particular consultant and trust that consultant's work, they are more likely to limit their review to the data submitted with the request (including any data on the types of soils, plants, and hydrology the consultant may have collected for use in delineating wetlands) and to question the consultant rather than independently verify the information with their own data sources. In the five districts we reviewed, consultant data were submitted for 571 of the 770 projects whose files we reviewed. The percentage of projects where consultant data were submitted varied by district, from a low of 55 percent of the Omaha District's projects (140 of 255 files) to a high of 94 percent of the Jacksonville District's projects (131 of 140 files).

Several project managers cautioned that the data represented by the maps and photographs are, at times, not accurate because the data are old or have not been verified by the agencies that prepared them. As noted above, many National Wetlands Inventory maps were prepared based on aerial photography from the 1980s. In addition, because of the large scale of the maps, they do not always accurately capture all wetlands, particularly wetland types that are difficult to detect from aerial photographs, such as small forested wetlands. Further, in some instances the maps and photographs do not provide clear evidence of whether a water or wetland is jurisdictional. In such cases, project managers told us that site visits are the best data source for making a determination. This is particularly common for projects located near a roadway or an area that has been extensively developed. Similarly, features such as culverts and low-lying areas that would often serve to connect an otherwise isolated water to a jurisdictional water are not always visible in topographic maps and aerial photographs, and a site visit may be the only means of determining whether such connections do in fact exist.
Other factors that influence whether a site visit is conducted, according to Corps project managers, include the proximity of the project site to the Corps' office and the resources available to travel to the site, the nature of the topography and the number of waters or wetlands that appear to be on the project site, a project manager's familiarity with the geographic area where the project is being undertaken, the potential for public concern over the proposed project, the size and value of the waters or wetlands on the project site, the extent to which the data from all of the different data sources independently confirm the existence and nature of waters or wetlands on a project site as well as whether they are connected to "waters of the United States," and the existence of any other federal, state, or local agency that may have oversight responsibility for waters or wetlands at the project site and whether officials from those agencies visited the site. In our review of project files, we found that project managers conducted site visits for 412 of the 770 projects whose files we reviewed. However, the extent to which site visits were conducted varied considerably by district, from a low of 34 percent of the St. Paul District's projects (53 of 158 files) to a high of 84 percent of the Chicago District's projects (124 of 148 files). This variability can be attributed, in part, to the size of the districts—the St. Paul District covers a broad area encompassing two states whereas the Chicago District covers only six counties in one state.

Corps records provide limited information on the rationale that the project managers used when deciding not to assert jurisdiction over certain waters and wetlands. In August 2004, the Corps required that project managers include a standardized form in each of the project files. The form provides basic information about the project site and requires project managers to provide rationales for their decisions to assert jurisdiction; however, rationales are not required for their nonjurisdictional determinations because it is assumed that this information would be included elsewhere in the project files. Corps appeals review officers and the Chief of the Regulatory Branch said that all files should contain rationales that are site-specific and provide the reasoning and evidence used to make the determination. However, the majority of the files we reviewed contained either rationales that provided little site-specific information about why the project managers made nonjurisdictional determinations or no explanations whatsoever.

In August 2004, to improve the consistency, predictability, and openness of jurisdictional determination reporting practices, the Corps required that files contain a standardized form that is to include basic information about the project site, such as the location and size of the project. The form is also to be used by project managers to clearly indicate what data were used in making a determination and the bases for the determination—that is, the specific federal regulations that allowed the Corps to assert or precluded it from asserting jurisdiction. While the form requires that project managers include a rationale for asserting jurisdiction over waters on a project site, the form does not require that a rationale be included for a nonjurisdictional determination.
According to the headquarters senior regulatory program manager responsible for overseeing jurisdictional determinations, the August 2004 form does not require that project managers include a rationale for their nonjurisdictional determinations because it was assumed that more detailed information would be included elsewhere in the project file. Corps appeals review officers we contacted said it is important for Corps files to contain the information specified on the August 2004 form. However, these officials told us it is important that all files, including nonjurisdictional determination files, contain detailed, site-specific rationales that provide the reasoning and evidence used to conclude whether the waters or wetlands were within federal jurisdiction in the event an appeal was filed, the project manager changed, or the Corps received a public inquiry. Corps appeals review officers said that a rationale should consist of (1) a detailed, site-specific commentary on how the on-site water does or does not connect with “waters of the United States”; (2) a description of what the data reviewed indicate; (3) a summary of the relevant hydrological conditions at the site; (4) a reference to any district-specific policy on asserting jurisdiction over waters that are considered adjacent to “waters of the United States” or navigable; and (5) a reason why the Corps concluded that the water is or is not jurisdictional. The Chief of the Regulatory Branch echoed the position of the appeals review officers. He told us it is important that the file support the Corps’ decision, particularly given public concern about the effect that SWANCC may have had on isolated, intrastate, nonnavigable waters. For example, since SWANCC, the Corps has received Freedom of Information Act requests from several environmental groups seeking information on nonjurisdictional determinations made by each of the Corps’ districts. The Chief of the Branch stated that the Corps must be able to respond quickly to such public inquiries and its decisions must be transparent and fully supported if the agency expects the public to have confidence in its regulatory decisions. However, we found that not all project managers are including a detailed rationale in the project files. Of the 770 nonjurisdictional determination files we reviewed, only 53 included a detailed rationale in the file. This ranged from a low of 4 percent of the Omaha District’s files (11 of 257) to a high of 31 percent of the Galveston District’s files (20 of 65). The examples in figure 7 illustrate site-specific rationales that explain how and why the Corps determined that it did not have jurisdiction. Unlike the examples in figure 7, most of the files—526—included only partial rationales that provide little in-depth, site-specific information that the project manager relied upon to conclude that the water is isolated. This ranged from a low of 46 percent of the Chicago District’s files (69 of 150) to a high of 83 percent of the Jacksonville District’s files (116 of 140). Figure 8 provides two examples of partial rationales. Many of the files we reviewed—191—did not contain any rationale to support the conclusion that the waters or wetlands under review were isolated. The percentage of files that contained no rationale also varied by district and ranged from a low of 12 percent of the Jacksonville District’s files (17 of 140) to a high of 49 percent of the Chicago District’s files (74 of 150). 
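Taken together, the three categories of rationales described above account for all 770 nonjurisdictional files we reviewed. The short Python sketch below is illustrative only; the category labels are ours, and the counts are those cited in this report.

```python
# Illustrative consistency check (not a Corps or GAO tool): the three rationale
# categories cited above should account for every nonjurisdictional file reviewed.
counts = {
    "detailed rationale": 53,
    "partial rationale": 526,
    "no rationale": 191,
}

total_files = 770  # nonjurisdictional determination files reviewed in the five districts

assert sum(counts.values()) == total_files  # 53 + 526 + 191 = 770

for category, n in counts.items():
    print(f"{category}: {n} of {total_files} files ({n / total_files:.0%})")
```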
Two examples of files with no rationale that we reviewed are presented in figure 9. Although we did not assess the accuracy of the determinations made by the Corps in these cases, we are concerned that the lack of a detailed rationale limits the transparency of the Corps' decision-making process and inhibits its ability to quickly respond to public inquiries and related challenges.

The Corps does not separately allocate resources for jurisdictional determinations but instead includes these resources in the total available for issuing permits. Corps headquarters allocates resources to its eight divisions based primarily on the levels they have received in prior years, and these divisions, in turn, allocate resources to the 38 districts on the same basis. The districts then allocate resources to carry out the regulatory program based on guidance issued in 1999. However, this guidance does not provide a separate program activity for jurisdictional determinations. Instead, the guidance directs the districts to allocate 60 percent to 80 percent of their resources to evaluating permits and 10 percent to 25 percent to ensuring that project proponents are in compliance with permit requirements. According to the Corps, about 80 percent of Corps resources are allocated to permitting, about 15 percent are allocated to enforcement and compliance, and about 5 percent are allocated to other activities. In four of the five districts we visited, staff responsible for evaluating permits perform jurisdictional determinations, while in the remaining district—Galveston—jurisdictional determinations are the responsibility of the compliance staff. District officials stated they do not know how much time is spent conducting jurisdictional determinations but that over the past several years their workloads have increased because of several factors, including SWANCC, while their budgets have not kept pace. As a result, they said their ability to effectively perform regulatory program activities, including making jurisdictional determinations, has been impaired, as can be seen in the following examples.

Omaha District officials said that because of budget constraints and heavy workloads, the district is unable to visit most project sites in evaluating permits and making jurisdictional determinations. The district is responsible for six states, and while it has an office in each of the states, site visits can frequently entail significant travel costs. While project managers can occasionally obtain district approval to visit project sites, because of funding constraints they will do so only for large projects that potentially affect valuable water resources. Although district officials told us that site visits are not always necessary, they stressed that site visits may be the best way to determine if the water or wetland is jurisdictional because the maps and other data that project managers review in the office may not clearly indicate whether connections to other waters exist.

In the Galveston District, officials told us that, in the past, their project managers' workload averaged about 60 regulatory projects at any given time, but this workload is now significantly more. One project manager estimated that his workload is about four times greater than it should be. As a result, project managers are unable to make as many site visits as they have in the past.
While Galveston District officials agreed with Omaha District officials that site visits are not always necessary, they pointed out that nonjurisdictional determinations can be difficult to make and that site visits may be needed to verify that the waters or wetlands at a project site are isolated. According to the Corps Regulatory Branch Chief, the Corps' workload has also increased because the complexity of each project has increased, and, as a result, more projects require that the Corps consult with other agencies, such as the Department of the Interior's Fish and Wildlife Service, because of concerns about threatened or endangered species that may inhabit the project sites. In January 2003, the Inspector General also reported resource constraints as an issue affecting the Corps' ability to effectively manage permit workloads. Resource constraints, according to the Regulatory Branch Chief, are having an even greater impact on the program because of the lack of reliable information on the number of regulatory activities that are accomplished and the amount of resources that are needed to accomplish those activities.

To obtain better information, in 2004, the Corps initiated a Workload Indicator Project. This project is intended to address two issues: (1) the agencywide imbalance between resources and workload and (2) district-level imbalances between resources and workloads. The project is also intended to link resources to measurable performance goals. As part of the project, in October 2004, Corps headquarters asked the districts to provide estimates on how much time is needed to complete 21 regulatory program tasks, such as making jurisdictional determinations, along with 103 associated subtasks, such as conducting a site visit as part of making a jurisdictional determination. According to the Chief of the Regulatory Branch, the estimates that the districts provided varied widely and will need to be refined over time. For example, some of these differences reflected the different nature of work required in some districts. One district with many threatened and endangered species estimated that it needed substantially more resources to evaluate permits because of the increased staff and time required to address environmental concerns. Other districts, such as those that cover wide geographic areas, estimated that they needed more resources to conduct site visits because of the additional time and travel costs involved. However, this official said that some differences may reflect inaccurate estimates of the time required to complete some of the tasks or subtasks because districts have never had to break down their workload in such detail. Despite the preliminary nature of the estimates, the Corps used them in fiscal year 2005 to allocate a 1 percent across-the-board regulatory program funding level increase to the districts. Based on the results of the Workload Indicator Project, eight districts were each allocated an additional $120,000 to, among other things, address their workload and performance. According to the Chief of the Regulatory Branch, the Workload Indicator Project estimates will be refined over time as the agency gains more experience using them, and it is expected that this effort will go a long way toward supporting future budget requests.

We identified several additional challenges that the Corps will face as it incorporates the workload indicator estimates when developing budget proposals and allocating resources to the districts.
First, the Corps’ data management systems cannot yet provide accurate and complete information on the number of regulatory actions, including jurisdictional determinations, completed by each district. The Corps is currently phasing in a new data management system that, according to agency officials, should be able to provide the required information, although it will not provide 100 percent of the data the agency believes necessary to make management decisions. According to the Chief of the Regulatory Branch, this system is expected to be fully operational by the end of fiscal year 2006 if the Corps receives the funding needed to correct user accessibility and data integration problems and fully implement it. The Corps is also exploring options for obtaining the additional data it may need to bridge the gap between the data management system and its proposed process for allocating resources. Second, the Corps will need time to make the transition from its current allocation method—based on historic allocations—to a method that is performance-based and reflects districts’ actual workloads. According to the Chief of the Regulatory Branch, a performance-based allocation process could result in shifting resources among districts. As noted above, the Corps allocates resources to its eight divisions based primarily on the levels they have received in prior years. According to the Corps, the divisions are then responsible for managing their resources and workloads from a regional perspective. According to Corps headquarters senior regulatory program managers, the divisions will be expected to reallocate resources among the districts to better meet individual district workloads and performance levels—such as, for example, issuing permits within specified time frames. Such resource reallocations could be accomplished by temporarily assigning project managers to districts that are experiencing larger workloads or poorer performance levels, or by having districts send permit applications to other districts for evaluation. The Corps is generally not using 33 C.F.R. § 328.3(a)(3) as the sole basis to assert jurisdiction over isolated, intrastate, nonnavigable waters. In February 2004, we reported that between January 2003 and January 2004, the districts sought formal project-specific headquarters approval a total of eight times before attempting to assert jurisdiction over isolated, intrastate, nonnavigable waters based solely on 33 C.F.R. § 328.3(a)(3). According to EPA officials, in three of the cases, the agencies ultimately determined that the waters in question were “waters of the United States” based on factors other than those identified in that regulatory provision. In two cases, the Corps and EPA determined that the waters in question were not jurisdictional; and, in another case, the district withdrew its request for headquarters approval. Two of the cases have yet to be resolved, even after 1-1/2 years, according to the senior regulatory program manager who is the focal point for coordinating such cases. This official told us that no additional requests to use this section of the regulations as the sole basis to assert jurisdiction have been submitted to headquarters since January 2004. Corps district officials told us they generally do not consider seeking jurisdiction over any isolated, intrastate, nonnavigable waters on the sole basis of 33 C.F.R. 
§ 328.3(a)(3) primarily because (1) headquarters has not provided detailed guidance on when it is appropriate to use this provision; (2) district offices believe that headquarters does not want them to assert jurisdiction over these waters or wetlands; (3) district offices are concerned about the amount of time that might be required for a decision from headquarters; or (4) their districts contain few isolated, intrastate, nonnavigable waters whose use, degradation, or destruction could affect interstate commerce. Because of concern about using 33 C.F.R. § 328.3(a)(3), Corps officials in the St. Paul, Omaha, and Jacksonville districts told us that they assert jurisdiction over isolated, intrastate waters only when public boat ramps are present to provide access to these waters. The senior regulatory program manager acknowledged that the lack of guidance and the lengthy time frames for receiving headquarters approval may have caused some districts to be reluctant to use 33 C.F.R. § 328.3(a)(3) as the sole basis for asserting jurisdiction. To clarify the process for seeking guidance and to establish time frames for obtaining headquarters approval, in January 2005, the Corps drafted a memorandum of agreement that (1) identifies a process for the Corps and EPA to follow when consulting on such requests, including procedures to follow when the agencies disagree; (2) lists the types of documentation that districts are to submit along with their referrals; and (3) establishes time frames for responding to the districts. This draft memorandum was shared with EPA in March 2005. As of July 2005, the two agencies agree that it would be helpful to develop additional guidance for the districts that would provide a clear understanding of when to use this section of the regulations. However, the agencies have yet to resolve differences regarding the content of the memorandum. These differences are delaying finalization of the memorandum, and, while discussions are continuing, the agencies have set no time frame for resolving them.

Neither the Corps nor EPA is collecting data to fully assess the impact of SWANCC on waters and wetlands that no longer fall under federal jurisdiction. The Corps began collecting data in April 2004, at EPA's request, in an effort to respond to congressional, project proponent, and public concerns about how field offices are applying the SWANCC ruling. However, the data being collected are limited and of questionable value for use in assessing the impact of SWANCC on aquatic resources. The agencies would like to collect better data, but these data are either not available or would be difficult to obtain. According to Corps and EPA officials, limited resources prevent them from collecting the additional data and conducting an in-depth analysis that would be required to fully assess the impact of SWANCC.

Neither the Corps nor EPA is collecting data that would allow a full assessment of the impact of the SWANCC ruling on isolated, intrastate, nonnavigable waters. In January 2003, EPA and the Corps requested that the public provide them with information, data, and comments on, among other things, the amount of wetland acreage potentially affected by the SWANCC ruling, as well as the functions and values of the wetlands and other waters that might be affected.
The Corps and EPA received about 130,000 comments, including those from states that estimated that many of the intrastate, nonnavigable waters in their states would be considered isolated as a result of the ruling. For example, Wisconsin estimated that of its 5.3 million acres of wetlands, about 1.1 million would no longer fall under federal jurisdiction. Texas estimated that because only about 21 percent of its 80,000 miles of rivers and streams were perennial, approximately 79 percent would not be considered navigable and thus would not be subject to federal regulation. Similarly, Texas estimated that some of its 304,000 acres of inland lakes and reservoirs would no longer be subject to federal regulation. To obtain information to respond to congressional, project proponent, and public concerns about how field offices are applying the SWANCC ruling, in October 2003, the Corps agreed to an EPA request to document all nonjurisdictional determinations. Specifically, beginning in April 2004, the Corps agreed to have district offices fill out a form for each project where the project managers make a nonjurisdictional determination and report these on a quarterly basis for 1 year. In requesting this information, EPA stated that it would, among other things, (1) better enable an assessment of the extent and nature of resources impacted by SWANCC, (2) help foster consistent and sound decision-making, and (3) help identify issues that might benefit from increased headquarters attention or guidance. These nonjurisdictional determination forms are being posted on each district's Web site. According to a senior regulatory program manager, even though the initial 1-year period has elapsed, for the near future the Corps is continuing to fill out the form to collect the data. The data being collected include the estimated size of the isolated water or wetland; the approximate size of the project site and its latitude and longitude; the name of the waterway where the project site is located; the type of water, such as prairie pothole, playa lake, or vernal pool; whether the water or wetland might be used as habitat for birds protected by migratory bird treaties or other migratory birds that cross state lines; whether the water would be used as habitat for endangered species; and whether the water or wetland is used to irrigate crops sold in interstate commerce. However, the data being collected by the Corps and EPA are inadequate to fully assess the impact of SWANCC on isolated, intrastate, nonnavigable waters. Specifically, the data being collected do not reflect the actual size of the nonjurisdictional water or wetland or the amount of the water or wetland that may be impacted by the project. The data collection form directs the project managers to categorize the size of the wetland found to be nonjurisdictional as being less than 1 acre, 1 to 3 acres, 3 to 5 acres, 5 to 10 acres, 10 to 25 acres, 25 to 50 acres, or greater than 50 acres. Moreover, we noted differences in the way that project managers are recording the acreage. For example, some project managers are including specific information on the number and actual size of the wetlands, while others are merely placing checkmarks in one of the categories. Additionally, some project managers are classifying almost all of the nonjurisdictional waters as wetlands even though they may not meet the Corps' definition of a wetland, thereby obscuring the impacts of SWANCC on both wetland and nonwetland waters. 
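To illustrate how the form's size categories coarsen the underlying data, the minimal sketch below bins hypothetical acreage figures into the categories described above. The acreage values and the helper function are assumptions made only for this illustration; they are not part of the Corps' form or data.

```python
# Illustrative only: hypothetical acreages binned into the size categories the
# data collection form is described as using. The category labels mirror the
# report; the acreage values and function are assumptions for this sketch.

BINS = [
    (0, 1, "less than 1 acre"),
    (1, 3, "1 to 3 acres"),
    (3, 5, "3 to 5 acres"),
    (5, 10, "5 to 10 acres"),
    (10, 25, "10 to 25 acres"),
    (25, 50, "25 to 50 acres"),
]

def size_category(acres: float) -> str:
    """Map an actual acreage to a checkbox category on the form."""
    for low, high, label in BINS:
        if low <= acres < high:
            return label
    return "greater than 50 acres"

# Two very different wetlands fall into the same checkbox, so the actual
# acreage cannot be recovered from the form alone.
for acres in (10.2, 24.9, 51.0):
    print(f"{acres:>5} acres -> {size_category(acres)}")
```

Because only the category is recorded, the 10.2-acre and 24.9-acre examples above become indistinguishable on the form, which is one reason the collected data cannot show the actual acreage affected.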
Further, the form asks only for the general size of the waters or wetlands found to be nonjurisdictional, and not what portion of the waters or wetlands on the site will be degraded or destroyed by the development. According to project managers, they may not have specific information on the project planned by the project proponent at the time of the jurisdictional determination and, as a result, may be unable to determine how the project will affect the waters or wetlands on the site. Further, if none of the waters or wetlands on a project site are jurisdictional, a permit is not required under the Clean Water Act, and thus, project managers may have little information, if any, about specific plans for any eventual development on the site. The data being collected on the form also may not provide reliable or sufficient information on the functional value of the waters. While the form requires that project managers indicate whether the water is or could be used as habitat by migratory birds or endangered species, the form may not be capturing reliable information because the project managers may not always know this information. One project manager said he has no expertise in identifying the birds that are protected by migratory bird treaties or the species that might be endangered; as a result, he was unsure how to fill out the form. According to another project manager, the staff was discouraged from indicating whether the water could be used as habitat by birds or other species unless they had proof that it was actually used in this manner. As a result, the data collected by the Corps may not accurately reflect the number of instances where the Corps has determined that waters and wetlands are nonjurisdictional even though they may be, or are, used as habitat by migratory birds, among other things. According to Corps and EPA officials, while they have analyzed some of the data collected to date, limited resources prevent them from conducting a more in-depth analysis of the data to assess the impact of SWANCC on aquatic resources. For the same reason, according to a senior regulatory program manager and EPA officials, neither agency is planning to conduct a more in-depth analysis of the data already collected. Even though the 1-year data collection period has expired, the Corps is still using the form to collect data for the near term. The Corps is, however, re-examining its data collection effort by, for example, revising the form, in coordination with EPA, to both shorten it and capture more relevant data. According to the senior regulatory program manager working on this effort, one of the issues needing to be resolved is what data are most relevant. Both EPA and Corps officials recognize that the data being collected have limitations, but they stated they did not want any data collection effort to be overly burdensome on project managers, given the limited resources available to collect and record the data. In addition, a Corps senior regulatory program manager said the agency has no mandated authority to further collect and analyze the data for nonjurisdictional determinations once that determination has been made and that doing so would only detract from the agency's primary mission of evaluating permits. The types of data that would need to be collected to fully assess the impact of SWANCC on aquatic resources are either not readily available or would require extensive resource investments that neither EPA nor the Corps can make. 
For example, data are needed on waters or wetlands that are impacted without notification to either the Corps or EPA. According to officials from both agencies, since SWANCC, project proponents do not always contact the Corps for a jurisdictional determination. Instead, they proceed with site development without any notification. Currently, neither the Corps nor EPA has a means to determine the extent to which this occurs. For those project proponents who do notify the Corps, data challenges remain extensive. According to several project managers, the Corps would need to collect data on the exact acreage of the water determined to be isolated, but collecting this information may be problematic if the project proponent does not provide it because the Corps lacks resources to measure waters over which it has no jurisdiction. Other project managers said that data would need to be collected on the extent to which the waters, even though they may not have a surface-water connection to other waters, are near other waters—all of which may have an underground water connection. Data are also necessary on the nature of the functional value these water systems provide. Several other project managers indicated that data would be needed on the extent and nature of waters that were considered jurisdictional prior to SWANCC to provide a baseline to measure the impact of SWANCC. However, project managers said these types of data are not readily available or easily obtained. Project managers' concerns about the need for additional data were also echoed in a journal of the Society of Wetland Scientists. In a series of articles on SWANCC, the society identified information gaps and areas for future research that could help assess the impact of SWANCC. These include the lack of (1) a consistent definition of an isolated wetland; (2) knowledge of the number and area of isolated wetlands in the United States; (3) information on the diversity of isolated wetlands relative to each other and to other ecosystems; (4) knowledge about other federal, state, tribal, and local programs that may protect isolated wetlands; and (5) information on how isolated wetlands, wetland complexes, and other at-risk waters contribute, hydrologically, chemically, and biologically to “waters of the United States.” Neither agency believes that it is possible to easily develop and readily implement a realistic approach that would allow the agencies to fully assess the impact of the ruling on federal jurisdiction under the Clean Water Act, given the lack of some data, the vast amount of data that would be needed to assess the impact of SWANCC, and current resource constraints. However, according to EPA officials, even though the agencies may not be able to conduct a thorough assessment of the impacts of SWANCC on the nation's aquatic resources, it is important to collect data on the number and nature of the Corps' nonjurisdictional determinations and make these data publicly available to increase the transparency and predictability of nonjurisdictional decisions. At the same time, according to some project managers, the data collected should not lead the public to draw erroneous conclusions about the impact SWANCC has had on isolated, intrastate, nonnavigable waters. In the aftermath of SWANCC, the Corps has taken some positive steps to increase the consistency, predictability, and openness of its jurisdictional determinations. 
However, although the Corps now requires its project managers to include rationales in their files that explain how and why the decision that certain waters or wetlands fall within federal jurisdiction was made, it does not require similar rationales for nonjurisdictional determinations. As stated by Corps appeals review officers and the Chief of the Regulatory Branch, the Corps should require detailed rationales for all jurisdictional determinations and not just those where it is asserting jurisdiction. Without this information in the file, the Corps will not be able to easily replicate its decisions, limiting its ability to quickly respond to an appeal or public inquiry. Furthermore, the lack of guidance from headquarters and the lengthy time frames that may be involved in receiving a decision from headquarters have discouraged Corps districts from asserting jurisdiction using the provisions under 33 C.F.R. § 328.3(a)(3). Since January 2001, the Corps and EPA have not been able to agree on the procedures the districts should follow when requesting the use of this provision to assert jurisdiction and have been unable to develop a process for the Corps and EPA to follow when consulting on such requests. Until the agencies finalize these procedures, Corps districts will have little incentive to use 33 C.F.R. § 328.3(a)(3) as a basis for asserting jurisdiction over certain waters and wetlands that may, in fact, be subject to Clean Water Act requirements. To provide greater transparency in the Corps’ processes for making nonjurisdictional determinations, we are recommending that the Secretary of the Army require the Corps to include in its project files explanations for nonjurisdictional determinations, as it does for jurisdictional determinations, and that these explanations be detailed and site-specific. To help provide greater clarity to the districts when using 33 C.F.R. § 328.3(a)(3) as the sole basis for asserting jurisdiction, we are also recommending that the Secretary of the Army, through the Corps, and the Administrator of EPA complete the process of jointly developing procedures that, at a minimum, include guidance for the type of information that districts should submit to headquarters, actions each agency is responsible for taking, time frames for each agency to complete their reviews, and provisions for resolving any interagency disagreement. We provided a draft of this report to the Secretary of the Department of Defense and the Administrator of EPA for review and comment. Both the Department of Defense and EPA concurred with the report’s findings and recommendations. In its comments, the Department of Defense stated that it is working with EPA to further streamline reporting requirements and improve documentation required to support all determinations. The department also pointed out that negotiations are ongoing to develop procedures for field staff to use when relying on 33 C.F.R. § 328.3(a)(3) as the sole basis for asserting jurisdiction. In its written comments, EPA pointed out that the Corps’ practice of collecting and posting nonjurisdictional determination information on the districts’ Web sites has been a part of the two agencies’ goal to increase transparency, predictability, and consistency of the regulatory program. EPA also noted that an important step in achieving this goal is for Corps districts and EPA regional offices to work closely together on cases involving geographically isolated waters. 
EPA commented that the process for doing so should allow the agencies to ensure more consistent application of the regulations, while taking into account all relevant information about a particular body of water. Both the Department of Defense and EPA provided technical comments and clarifications, which we incorporated as appropriate. The Department of Defense's and EPA's written comments are presented in appendixes III and IV, respectively. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and Members of Congress; the Secretary of Defense; the Administrator, EPA; and the Chief of Engineers and Commander, U.S. Army Corps of Engineers. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To identify the processes and data the U.S. Army Corps of Engineers (the Corps) uses to make jurisdictional determinations, we reviewed federal regulations and the Corps' related guidance. We also interviewed Corps officials in headquarters and 5 of the Corps' 38 districts—Chicago; Galveston, Texas; Jacksonville, Florida; Omaha, Nebraska; and St. Paul. We selected 4 of the 5 districts because they made more nonjurisdictional determinations between April and December 2004 than any of the Corps' other districts. We selected the fifth district—Galveston—because it also accounted for a large number of nonjurisdictional determinations and was located in a geographic region different from that of the other four districts. Altogether, these five districts accounted for 58 percent of the nonjurisdictional determinations the Corps made between April and December 2004. This time period was selected because data on the Corps' nonjurisdictional determinations were not readily available before April 2004. To determine the extent to which the Corps documents its decisions when it concludes that it does not have jurisdiction over certain waters and wetlands, we reviewed 770 project files in the five selected districts where the agency determined, between April and December 2004, that it did not have jurisdiction over some or all of the waters on those project sites. Specifically, we reviewed 150 files in the Chicago District, 65 in the Galveston District, 140 in the Jacksonville District, 257 in the Omaha District, and 158 in the St. Paul District. We used a data collection instrument to record specific data for each of the files, such as whether a site visit was conducted, whether the project proponent used a consultant, whether the project manager indicated what data were reviewed in the course of making the determination, and the types of data that were included in the file. We also interviewed appeals review officers who review project files in the five districts to determine what documentation they believe is necessary to include in project files. 
We obtained this information from the appeals review officers because, until promulgating a standardized form in August 2004, the Corps had no guidance on what information on jurisdictional determinations should be contained in project files. Further, the Corps has no guidance on what a rationale should include. In addition, we contacted the appeals review officers because they are the agency's internal quality assurance check to ensure that the Corps' administrative records fully support jurisdictional determinations. We used the appeals review officers' views on what information should be included in project files, including what constitutes a detailed rationale, as criteria in reviewing the files and categorizing each of the 770 files as having no rationale, a partial rationale, or a detailed rationale. To ensure that our initial file reviews were accurate, we randomly selected a minimum of 10 percent of the files and independently reviewed them a second time, comparing the information recorded in the data collection instrument to the original file to verify that the information had been entered accurately and that our assessment of the project manager's rationale was reasonable. In reviewing the project files and analyzing project managers' rationales, we did not evaluate whether project managers' determinations were correct. We also did not evaluate whether the information available to the project managers in making their jurisdictional determinations was sufficient. To identify the process the Corps uses to allocate resources for making jurisdictional determinations, we reviewed its standard operating procedures and related guidance for carrying out the Corps' section 404 regulatory program. We also interviewed the Corps officials in headquarters and in each of the five selected districts who are responsible for preparing resource estimates for carrying out the program. In addition, we obtained data on the level of resources allocated to the Corps and to each of the districts, as well as workload data, including the number of determinations made by the districts, for fiscal years 2002 and 2003. Finally, to obtain a broad overview of the program, we obtained historical program statistics for fiscal years 1997 through 2004. To determine the extent to which the Corps is asserting jurisdiction over isolated, intrastate, nonnavigable waters using its remaining authority in 33 C.F.R. § 328.3(a)(3), we interviewed Corps and Environmental Protection Agency (EPA) officials in headquarters to identify the number and nature of cases that have been submitted to headquarters between January 2003 and July 2005 for approval. We also interviewed district officials to determine the circumstances under which they would ask to assert jurisdiction using these Corps regulations and whether they had sought formal project-specific headquarters approval prior to using them. To determine the extent to which the Corps and EPA are collecting data to assess the impact of Solid Waste Agency of Northern Cook County v. U.S. Army Corps of Engineers (SWANCC), we interviewed Corps and EPA officials at their respective headquarters to identify what actions have been taken or are planned to assess the impact. We also obtained and reviewed forms being used to collect data on nonjurisdictional determinations made since April 2004. 
In addition, we interviewed Corps project managers to determine their views on the impact of SWANCC, whether data being collected were sufficient to assess the impact of SWANCC, and what data should be analyzed to assess the impact. We conducted our work from June 2004 through July 2005 in accordance with generally accepted government auditing standards. This appendix provides detailed information on the results of our review of 770 files in five Corps district offices—Chicago; Galveston, Texas; Jacksonville, Florida; Omaha, Nebraska; and St. Paul. Table 1 summarizes the number of nonjurisdictional determination files we reviewed in the five districts. Tables 2 through 6 summarize the types of data we found in the files we reviewed in the five districts. Most of the project proponents relied on the use of consultants to prepare or help prepare their jurisdictional requests or permit applications. The Omaha District had the fewest number of requests or applications that were prepared, in part, by consultants. Table 7 summarizes the number of project proponents that relied on the use of consultants. Project managers can conduct site visits in the course of making their jurisdictional determinations. The percentage of projects where site visits were conducted varied by district, with fewer site visits being conducted in St. Paul and Omaha. The St. Paul District encompasses two states, while the Omaha District has all or portions of six states. More site visits were conducted in the Chicago District, which covers a six-county area. The Jacksonville District also conducted site visits for the majority of its determinations. Even though this district encompasses the entire state, it has 12 field offices located around the state to reduce the geographic distance to project sites. Table 8 summarizes the number of projects where project managers conducted a site visit in each of the districts we visited. According to Corps appeals review officers, project files should clearly identify what data were used by project managers in the course of making their determinations, so that the data can be readily replicated if necessary. Even so, districts varied widely in the extent to which the project files contained this clear identification, as shown in table 9. Of all the districts, the Chicago District clearly identified the data used in almost all of the project files we reviewed. According to Corps appeals review officers, project files should also contain a basis for asserting or not asserting jurisdiction over any water or wetland on the project site. A basis is the regulatory authority used for asserting jurisdiction, or the reason for not asserting jurisdiction. As shown in table 10, almost all of the files we reviewed contained the basis for the determinations. According to Corps appeals review officers, in addition to a clear identification of data used and a basis for the determination, project files should contain a detailed rationale for the determination. A detailed rationale is one that is site-specific, references data used and how that data led to the project manager’s conclusion, and cites district policy with respect to district-specific practices for asserting jurisdiction over waters, such as what conditions must be met for a water to be adjacent to a “water of the United States.” Few files, however, contained a detailed rationale. The Galveston District had the largest percentage of project files that contained a detailed rationale. 
Table 11 summarizes the types of rationales included in the project files we reviewed in the five districts. In addition to the contact named above, Doreen Feldman, Curtis Groves, Anne Rhodes-Kline, Sherry McDonald, Ken McDowell, Marcia Brouns McWreath, Greg Peterson, Jerry Sandau, Carol Hernstadt Shulman, and Rebecca Spithill made key contributions to this report.
Section 404 of the Clean Water Act prohibits the discharge of dredged or fill material into federally regulated waters without first obtaining a U.S. Army Corps of Engineers (Corps) permit. Before 2001, the Corps asserted jurisdiction over most waters, including isolated, intrastate, nonnavigable waters, if migratory birds could use them. However, in January 2001, the U.S. Supreme Court concluded that the Corps exceeded its authority in asserting jurisdiction over such waters based solely on their use by birds. GAO was asked to examine, among other things, the (1) processes and data the Corps uses for making jurisdictional determinations; (2) extent to which the Corps documents decisions that it does not have jurisdiction; (3) extent to which the Corps is using its remaining authority to assert jurisdiction over isolated, intrastate, nonnavigable waters; and (4) extent to which the Corps and the Environmental Protection Agency (EPA) are collecting data to assess the impact of the court's January 2001 ruling. The five Corps districts included in GAO's review generally used similar processes and data sources for making jurisdictional determinations. After the districts receive a request for a determination, a project manager will review the submitted data for completeness, request additional data from the applicant, as necessary, and analyze the data to decide whether any waters are jurisdictional under the act. Data reviewed by project managers include photographs and topographic, soils, and wetland inventory maps that show, among other things, where the proposed project is located, whether other agencies have identified waters on the property, and whether there appears to be a basis for waters to be considered federally regulated under the act. Site visits are generally conducted when maps and photographs are not sufficiently detailed to make determinations. While GAO found that the Corps generally documents its rationale for asserting jurisdiction over waters or wetlands, it does not prepare similar documentation for nonjurisdictional determinations. Such rationales are important because determinations can be challenged by property owners and the public. GAO found that only 5 percent or less of the files in four of the five districts contained a detailed rationale, while 31 percent of the files in the fifth district contained such a rationale. The percentage of files that contained no rationale whatsoever as to why the Corps did not assert jurisdiction ranged from a low of 12 percent to a high of 49 percent in the five districts. The remaining files contained partial rationales. Following the Supreme Court's January 2001 ruling, the Corps is generally not asserting jurisdiction over isolated, intrastate, nonnavigable waters using its remaining authority. Since January 2003, EPA and the Corps have required field staff to obtain headquarters approval to assert jurisdiction over waters based solely on links to interstate commerce. Only eight cases have been submitted, and none of these cases have resulted in a decision to assert jurisdiction. According to project managers, they are reluctant to assert jurisdiction over these kinds of waters because of the lack of guidance from headquarters and perceptions that they should not be doing so. Although the Corps has drafted a memorandum that contains guidance for the districts, EPA and the Corps have not yet reached agreement on the content of the document. 
At EPA's request, over the last year, the Corps has collected data on field staffs' nonjurisdictional determinations, including limited data on wetlands impacted by the court's ruling. However, officials acknowledge that these data will be inadequate to assess the impacts of the ruling on wetlands jurisdiction. As a result, neither agency has conducted or plans to conduct an in-depth analysis of data already collected and they are re-examining their data collection efforts. Moreover, neither agency believes that an effective approach to fully assess the impacts of the ruling can be easily implemented because it would be resource intensive to do so and would require a vast array of data, some of which are not readily available.
DOD contingency operations, such as those in support of GWOT, can involve a wide variety of activities such as combating insurgents, training the military forces of other nations, and conducting small-scale reconstruction and humanitarian relief projects. Volume 12, chapter 23 of the DOD Financial Management Regulation (DOD 7000.14-R) establishes financial policies and procedures for contingency operations and generally guides the DOD components' spending by defining what constitutes incremental costs and by providing examples of eligible incremental costs. The costs incurred for contingency operations include the pay of mobilized reservists, as well as the special pays and allowances for deployed personnel, such as imminent danger pay and foreign duty pay; the cost of transporting personnel and materiel to the theater of operation and supporting them upon arrival; and the operational cost of equipment such as vehicles and aircraft, among many other costs. Costs that are incurred regardless of whether there is a contingency operation, such as the base pay of active duty military personnel, are not considered incremental and therefore are funded in DOD's base budget. DOD reports its GWOT-related costs in terms of obligations, which are incurred through actions such as orders placed, contracts awarded, services received, or similar transactions. When obligations are incurred, the DOD components enter them into their individual accounting systems. An obligation entry may include a number of different identifiers, including information such as the funding source, the contingency operation, and the category of cost as determined by the individual component. Volume 12, chapter 23 of the DOD Financial Management Regulation directs components to capture contingency costs within their existing accounting systems and at the lowest possible level of organization. Individual obligation data that are coded as being in support of GWOT are recorded and sent through the component's chain of command, where they are aggregated at successively higher command levels. In a series of reports, we have identified numerous problems in DOD's processes for recording and reporting obligations, raising significant concerns about the overall reliability of DOD's reported obligations. In addition, DOD's financial management has been on GAO's list of high-risk areas requiring urgent attention and transformation since 1995. Factors affecting the reliability of DOD's reported obligations include long-standing deficiencies in hundreds of nonintegrated financial management systems requiring manual entry of some data in multiple systems, and the lack of a systematic process to ensure that data are correctly entered into those systems. On its own initiative and in response to our recommendations, DOD has placed greater management focus on weaknesses in GWOT cost reporting, such as establishing additional procedures for analyzing variances in reported obligations and disclosing underlying reasons for significant changes. In addition, DOD established a Senior Steering Group in February 2007, including representatives from DOD, DFAS, and the military services, in an effort to standardize and improve the GWOT cost-reporting process and to increase management attention to the process. In conjunction with the Senior Steering Group, a GWOT Cost-of-War Project Management Office was established to monitor work performed by auditing agencies and to report possible solutions and improvements to the Senior Steering Group. 
It is tasked with leading initiatives in improving the credibility, transparency, and timeliness of GWOT cost reporting. DOD’s efforts are ongoing and we have continued to monitor its progress as GWOT cost reporting has evolved. DOD and the military services continue to take steps to improve some aspects of the accuracy and reliability of GWOT cost reporting. Some examples are discussed below. Because efforts to implement some of these initiatives are still in the early stages, their effect on the reliability of GWOT cost reporting is uncertain. DOD has undertaken several initiatives to improve the accuracy and reliability of its GWOT cost data. First, to promote the goal of continually improving its cost-of-war processes and reports, in February 2008, DOD required its components to statistically sample and validate their fiscal year 2008 GWOT obligation transactions on a quarterly basis beginning with the first quarter of fiscal year 2008. DOD also required its components to review randomly sampled non-GWOT obligations to determine whether the transactions were properly classified as non-GWOT versus GWOT. According to DFAS officials, the new requirement has improved the reliability of reported GWOT obligations because DOD components are taking actions to improve their GWOT cost reporting procedures and are making corrections when errors such as missing or illegible supporting documentation, missing codes, and miscoded transactions are found. DFAS plans to include a requirement to review and validate GWOT obligation data in an update to volume 3, chapter 8 of the DOD Financial Management Regulation. Second, DOD is initiating a new contingency cost-reporting system in fiscal year 2009 called the Contingency Operations Reporting and Analysis System. DOD’s goals are to automate the collection of GWOT cost data from DOD components and improve the timeliness of cost-of-war reporting. This system pulls elements of GWOT transaction data directly from DOD components’ accounting systems into its data store. Limited features of the system became available for use in October 2008 and it should be fully operational by September 2009. Upon completion of the project, this system should allow DOD and external users to have a consolidated location to view and analyze data for the cost of war, disaster relief, and all other contingencies. Users will have access through a Web browser and should be able to filter data and perform various analyses. Previously, the DOD components individually gathered and manually entered their GWOT cost data monthly into a template provided by DFAS for cost-of-war reporting. According to DFAS officials, the new system is designed to ensure better reliability and eliminate the possibility of manual errors. Third, DFAS is issuing a redesigned monthly cost-of-war report through the Contingency Operations Reporting and Analysis System, starting in fiscal year 2009, to replace DOD’s monthly Supplemental and Cost of War Execution Report, which was provided to external customers, including Congress, the Office of Management and Budget, and GAO. The first new cost-of-war report, commonly referred to as the Contingency Operations Status of Funds Report, was issued in December 2008 and covered costs for October 2008. According to DOD, this redesigned report should improve transparency over GWOT costs by comparing appropriated GWOT supplemental and annual funding to reported obligations and disbursements. 
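As a rough illustration of the funding-to-execution comparison the redesigned status-of-funds report is described as making, the sketch below computes unobligated balances from appropriated funding, reported obligations, and disbursements. The appropriation names, dollar amounts, and table layout are hypothetical assumptions for this sketch, not DOD's actual report format.

```python
# Illustrative only: comparing appropriated funding to reported obligations and
# disbursements, the basic comparison described for the redesigned report.
# All dollar amounts (in millions) and appropriation titles are hypothetical.

rows = [
    # appropriation, funding, obligations, disbursements
    ("Operation and Maintenance, Army",  1000.0, 925.0, 610.0),
    ("Military Personnel, Marine Corps",  400.0, 398.0, 390.0),
    ("Aircraft Procurement, Air Force",   250.0, 175.0,  40.0),
]

print(f"{'Appropriation':36} {'Funding':>9} {'Obligated':>10} {'Disbursed':>10} {'Unobligated':>12}")
for name, funding, obligated, disbursed in rows:
    unobligated = funding - obligated  # funding not yet obligated
    print(f"{name:36} {funding:9.1f} {obligated:10.1f} {disbursed:10.1f} {unobligated:12.1f}")
```

The point of the comparison is simply that obligations and disbursements can be read against the funding provided, which the previous report did not allow.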
The previous cost-of-war report displayed obligations (both monthly and cumulative by fiscal year) by appropriation, contingency operation, and DOD component, but did not compare obligations to appropriated funding. The military services have also taken actions to correct weaknesses in the reliability of their GWOT cost data. Examples for each of the services are discussed below. Since these actions have only recently been implemented, their effect on the reliability of GWOT cost reporting is uncertain. We found that the Marine Corps was not reporting obligations in descriptive cost categories in the DOD Supplemental and Cost of War Execution Report as required in volume 12, chapter 23 of the DOD Financial Management Regulation, which DOD established to provide better transparency over reported costs. Specifically, the Marine Corps was reporting obligations in the miscellaneous category of “other supplies and equipment” rather than the more descriptive cost categories. We brought this issue to the attention of both DFAS and the Marine Corps office responsible for submitting monthly cost data to DFAS. Marine Corps officials acknowledged the absence of the data and indicated that they would attempt to provide further breakdown of the Marine Corps’ reported obligations in future reports. In June 2008, the Marine Corps revised its cost-reporting procedures to provide further breakdown of reported obligations for “other supplies and equipment” in DOD’s cost-of-war reports. In addition, Marine Corps officials told us that in May 2008 they streamlined their cost-of-war reporting by centralizing their GWOT cost data-gathering and reporting procedures. Prior to this time, commands would individually submit their monthly GWOT cost data to Marine Corps headquarters. According to Marine Corps officials, the new procedures have improved the visibility and reliability of reported costs across the service, especially at the command level. We found that the Air Force was reporting some operation and maintenance obligations in the miscellaneous cost categories for “other supplies and equipment” and “other services and miscellaneous contracts” rather than reporting these obligations in the more descriptive cost categories that DOD had established. We brought this issue to the attention of both DFAS and the Air Force office responsible for submitting monthly cost-of-war data to DFAS. In response, the Air Force and DFAS revised the Air Force’s cost-reporting procedures so that costs could only be reported in the more descriptive cost categories. Our analysis of the fiscal year 2008 Army obligation data showed that the Army was misusing certain accounting codes to capture costs for GWOT contingency operations. Army officials told us that commands were incorrectly using these codes to record costs for activities that were not adequately funded in the base budget, such as contracts for security guards and other anti-terrorism force-protection measures for facilities and installations located outside of the continental United States. Consequently, almost $2 billion in obligations for operation and maintenance was included in DOD’s cost-of-war report for costs that may not be directly attributable to GWOT contingency operations. In addition, the Army reported about $220 million in GWOT obligations for operation and maintenance costs associated with its modular restructuring initiative. 
Army officials told us that the Army modular restructuring initiative is not an incremental cost and therefore should not have been included in the cost-of-war report. The Army has addressed these issues for fiscal year 2009 by revising its cost code structure and eliminating cost codes that commands have misused in the past. During the course of our work, we found that the Navy lacked a centralized and documented process for its GWOT cost reporting. For example, Navy headquarters had little visibility over how lower-level commands record and report their GWOT costs. Moreover, the Navy’s cost-reporting process relied on several computer spreadsheets that required manual data input. The Navy also did not have formal guidance for GWOT cost reporting. In addition, our prior work had revealed that the Navy’s Atlantic Fleet and Pacific Fleet used different approaches for allocating a ship’s normal operating costs and GWOT costs. In September 2008, the Navy issued formal guidance for GWOT cost reporting in response to weaknesses in internal controls for contingency cost reporting that were identified as a result of its quarterly validations of GWOT obligation transactions. According to the Navy, the new guidance will increase the visibility of its costs, standardize its cost-reporting process for contingency operations, and increase the ability to audit its financial systems. Further, beginning in fiscal year 2008, the Atlantic Fleet and Pacific Fleet began using the same cost model for calculating how much of a ship’s total operating costs should be allocated to GWOT. This cost model estimates a ship’s GWOT operating costs based on the number of days that it is deployed in support of a military operation. According to the Navy, this cost model is part of a broader initiative to improve and coordinate financial management processes at both the Atlantic Fleet and Pacific Fleet. Although DOD has taken steps to improve certain aspects of its GWOT cost reporting, its approach to identifying the costs of specific operations has resulted in the overstatement of costs, in some cases for Operation Iraqi Freedom in particular and in other cases for both contingencies. Since 2001, DOD has reported significant costs in support of Operation Iraqi Freedom and Operation Enduring Freedom. However, we found that reported costs for Operation Iraqi Freedom may be overstated due to weaknesses in DOD’s methodology for reporting its GWOT costs by contingency operation. Furthermore, the military services have reported some costs that are not directly attributable to the support of either Operation Iraqi Freedom or Operation Enduring Freedom. As of September 2008, DOD had reported total obligations of about $654.7 billion for GWOT, including about $508.4 billion, or 78 percent, for Operation Iraqi Freedom, about $118.2 billion, or 18 percent, for Operation Enduring Freedom, and about $28.1 billion, or 4 percent, for Operation Noble Eagle. As figure 1 shows, since fiscal year 2001, Operation Iraqi Freedom has accounted for the largest amount of total reported obligations among these three operations. However, DOD’s reporting of costs for GWOT does not reliably represent the costs of contingency operations, for reasons discussed below. We found that reported costs for Operation Iraqi Freedom may be overstated due to weaknesses in DOD’s methodology for reporting its GWOT costs by contingency operation. 
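As a rough illustration of the day-based cost model the Navy is described as using, the sketch below allocates a ship's annual operating cost to GWOT in proportion to days deployed in support of an operation. The dollar figure, day counts, and the simple proportional formula are assumptions for this sketch, not the Navy's actual model or data.

```python
# Illustrative only: allocating a ship's annual operating cost to GWOT in
# proportion to the days it was deployed in support of a military operation.
# The figures and the proportional formula are assumptions for this sketch.

def gwot_share(annual_operating_cost: float,
               days_deployed_for_operation: int,
               days_in_year: int = 365) -> float:
    """Estimate the portion of operating cost attributable to the operation."""
    return annual_operating_cost * days_deployed_for_operation / days_in_year

# A hypothetical ship with $40 million in annual operating costs, deployed
# 146 days in support of a contingency operation.
print(f"GWOT-attributed cost: ${gwot_share(40_000_000, 146):,.0f}")
```

Under these hypothetical figures, 40 percent of the year's deployed days yields 40 percent of the ship's operating cost, which is the kind of consistent, documentable allocation the common fleet model is intended to provide.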
Volume 12, chapter 23 of the DOD Financial Management Regulation emphasizes the importance of cost reporting and requires DOD components to make every effort possible to capture and accurately report the cost of contingency operations. Furthermore, this regulation states that actual costs should be reported, but when actual costs are not available, DOD components are required to establish and document an auditable methodology for capturing costs. While the Army and Marine Corps are capturing totals for procurement and certain operation and maintenance costs, they do not have a methodology for determining what portion of these GWOT costs is attributable to Operation Iraqi Freedom versus Operation Enduring Freedom. For example, both military services reported their GWOT costs for procurement and certain operation and maintenance activities as costs exclusively attributable to Operation Iraqi Freedom, although a portion of these costs are attributable to Operation Enduring Freedom. In fiscal year 2008: The Army reported about $30.2 billion in GWOT procurement obligations as costs tied to Operation Iraqi Freedom and none as part of Operation Enduring Freedom, even though, according to Army officials, some of these costs were incurred in support of Operation Enduring Freedom. These reported obligations include both non-reset- related and reset-related procurement for items such as aircraft, munitions, vehicles, communication and electronic equipment, combat support, up-armored High Mobility Multipurpose Wheeled Vehicles, and countermeasures for improvised explosive devices. The Army reported obligations of about $8 billion for operation and maintenance associated with reset for Army prepositioned stocks, depot maintenance, recapitalization, aviation special technical inspection and repair, and field maintenance as part of Operation Iraqi Freedom but none for Operation Enduring Freedom, even though, according to Army officials, some of these costs were incurred in support of Operation Enduring Freedom. The Marine Corps reported $3.9 billion in procurement obligations as costs tied to Operation Iraqi Freedom but none as part of Operation Enduring Freedom, even though, according to Marine Corps officials, some of these costs were incurred in support of Operation Enduring Freedom. As in the case of the Army, these reported obligations include non-reset-related and reset-related procurement for various items. The Marine Corps reported obligations of about $1.1 billion for operation and maintenance for “reconstitution/resetting the force” as part of Operation Iraqi Freedom but none for Operation Enduring Freedom, even though, according to Marine Corps officials, some of these costs were incurred in support of Operation Enduring Freedom. The reason military service officials gave for not separating equipment- related costs between the two operations was that it was difficult to do so. Army officials told us that when actual costs cannot be clearly attributed to Operation Iraqi Freedom or Operation Enduring Freedom, they report all of these costs as part of Operation Iraqi Freedom since it is viewed as the larger of the two operations in terms of costs and funding. Marine Corps officials stated that they did not always know where GWOT equipment purchased with procurement appropriations ultimately went. 
These officials told us that they believed that the vast majority of the equipment was delivered to Iraq since, prior to April 2008, the bulk of Marine Corps forces had been deployed to Iraq in support of Operation Iraqi Freedom. While this assumption could be generally correct, without data on where equipment was delivered, it is unclear what costs were incurred to support each operation. We observed that the command responsible for the acquisition and sustainment of war-fighting equipment for the Marine Corps did not have a cost code for Operation Enduring Freedom. As a result, all of the Marine Corps’ reported obligations for procurement and equipment-related operation and maintenance expenses were being coded in support of Operation Iraqi Freedom. Without a methodology for determining what portion of total GWOT obligations is attributable to Operation Iraqi Freedom or Operation Enduring Freedom, reported costs for Operation Iraqi Freedom may be overstated and cost information for both operations will remain unreliable. The Marine Corps reported about $1.4 billion in obligations for procurement and operation and maintenance in fiscal year 2008 in support of Grow the Force—a long-term force-structure initiative—as part of Operation Iraqi Freedom. Grow the Force is an initiative that was announced by the President in January 2007 to increase the active duty end-strength of the Army and Marine Corps. According to Marine Corps strategic guidance, this increase in force structure will provide the Marine Corps with additional resources needed to fight what the Marine Corps refers to as the “long war.” The guidance outlines the Marine Corps’ strategic plan for force employment to meet the need for counterinsurgency and building partnership capacity in support of the National Defense Strategy and multinational efforts in the “Global War on Terrorism/Long War.” The Marine Corps established a cost code for capturing Grow the Force costs. A Marine Corps official told us that the service reported all obligations in support of Grow the Force as part of Operation Iraqi Freedom because, prior to April 2008, the majority of Marines deployed overseas were stationed in Iraq. Marine Corps officials at commands we visited told us that examples of their commands’ reported obligations for operation and maintenance in support of Grow the Force included civilian labor and infrastructure costs for bases and facilities located inside the United States. These officials further stated that these costs were necessary to accommodate the increased size of the force. Similarly, at one Marine Corps command, we found reported GWOT costs for the repair and renovation of sites and facilities located within the United States for the purpose of improving security against terrorism. Marine Corps officials at this command said that these security initiatives included costs for such items as barbed wire fences, automatic vehicle gates, automobile barricades, and security cameras. These officials further stated that Marine Corps headquarters instructed them to code these costs as part of Operation Iraqi Freedom. The Marine Corps reported about $42.4 million in obligations for operation and maintenance for these security costs in fiscal year 2008. The Air Force established a code for capturing “long war/reconstitution” operation and maintenance costs based on changes in DOD’s funding guidance for GWOT requests in fiscal year 2007. 
Air Force guidance defines “long war” costs as all incremental costs related to the war on terror beyond costs strictly limited to Operation Iraqi Freedom and Operation Enduring Freedom. These costs include reconstitution/reset costs for combat losses, accelerated wear and necessary repairs to damaged equipment or replacement with newer models when existing equipment is no longer available or economically feasible, and costs to accelerate specific force capabilities to carry out GWOT. Among the costs included in this code are forward-presence deployments, or what the Air Force calls Theater Security Packages, a forward-basing concept conducted in the Pacific Command area of responsibility that involves both bombers and select fighter aircraft. Air Force officials told us that because there is no category to report recurring or longer-term costs separately from established GWOT contingency operations, they report long war costs, including costs related to Theater Security Packages, as part of Operation Iraqi Freedom since it is the largest operation. The Air Force reported about $464 million in long-war costs for fiscal year 2008. The Navy reported costs for forward-presence missions as part of GWOT contingency operations even though the Navy routinely deploys its forces around the globe in peacetime as well as wartime. As these GWOT contingency operations have evolved over time, it has become increasingly difficult to distinguish costs that can be deemed incremental expenses in support of these operations from costs that would have been incurred whether or not these contingency operations took place, such as ship operating costs for the Navy. For example, the Atlantic and Pacific surface commands, which are responsible for managing the Navy’s surface ships, reported obligations for costs associated with ship operations and port visits for ships deployed on forward-presence missions in the Western Pacific. Navy officials told us that some of these ships are stationed out of Hawaii, Japan, and Guam and operate near Malaysia, the Philippines, and Thailand. According to Navy officials, these ships are spending more time at sea and visiting more foreign ports in an effort to provide additional presence in support of GWOT. In 2008, Navy officials stated that Navy guidance expanded the definition of incremental costs in support of Operation Enduring Freedom to include those costs associated with forces operating in the Southern Command area of responsibility. We found that the Atlantic and Pacific surface commands reported obligations for ship operating costs and port visit costs for ships deployed on humanitarian missions in Central and South America. Navy officials said that ships deployed on humanitarian missions have visited countries such as El Salvador and Peru. These officials told us that the Navy considers the humanitarian missions to be GWOT-related because they benefit the security of the United States by spreading goodwill and reducing the expansion of terrorism in foreign nations. Costs for these missions are included within the Atlantic and Pacific surface commands’ ship operating costs for GWOT, which, according to our analysis, represented about 21 percent (about $875 million) of the Atlantic Fleet and Pacific Fleet’s total GWOT reported obligations for operation and maintenance (about $4.2 billion) in fiscal year 2008. 
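To make concrete what a documented allocation between the two operations could look like when actual costs are unavailable, the sketch below splits a reported total pro rata across operations using a stated basis. This is purely illustrative: the choice of basis (share of deployed personnel), the personnel figures, and the resulting split are assumptions for this sketch; neither DOD nor GAO prescribes this particular method. The $30.2 billion figure is the Army procurement total discussed earlier, which was reported entirely under Operation Iraqi Freedom.

```python
# Illustrative only: a pro-rata split of a reported total between Operation
# Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) using a documented
# basis. The basis, weights, and resulting amounts are hypothetical.

def allocate(total_obligations: float, basis: dict) -> dict:
    """Split a total across operations in proportion to a documented basis."""
    basis_total = sum(basis.values())
    return {op: total_obligations * share / basis_total for op, share in basis.items()}

deployed_personnel = {"OIF": 150_000, "OEF": 30_000}   # hypothetical allocation basis
split = allocate(30_200_000_000, deployed_personnel)    # $30.2 billion reported as OIF only
for op, amount in split.items():
    print(f"{op}: ${amount / 1e9:.1f} billion")
```

Whatever basis is chosen, the point is that it be documented and auditable, so that the share attributed to each operation can be traced and reviewed rather than defaulted to the larger operation.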
Until DOD reconsiders whether expenses not directly attributable to specific GWOT contingency operations are incremental costs, the military services may continue to include these expenses as part of Operation Iraqi Freedom and Operation Enduring Freedom. Furthermore, reported costs for both operations may be overstated and costs not directly attributable to either operation may continue to be included in DOD’s GWOT funding requests rather than the base budget. In light of the nation’s long-term fiscal challenge and the current financial crisis, DOD will need a more disciplined approach to budgeting and evaluating trade-offs as it continues to support ongoing operations and prepares for future threats. As the department prepares additional GWOT funding requests for military operations in support of Operation Iraqi Freedom and Operation Enduring Freedom, reliable and transparent cost information will be of critical importance in determining the future funding needs for each operation. However, DOD’s approach to cost reporting does not reliably represent the costs of these contingency operations. Although DOD has reported significant costs for Operation Iraqi Freedom and Operation Enduring Freedom, the cost for Operation Iraqi Freedom may be overstated, since DOD does not have a methodology to determine what portion of its total reported GWOT obligations for procurement and certain operation and maintenance costs is attributable to each operation. Furthermore, it is difficult to determine whether some expenses not directly attributable to Operation Iraqi Freedom and Operation Enduring Freedom are actually incremental costs and incurred to support those operations. Expenses beyond those directly attributable to either operation may be more reflective of the enduring nature of GWOT and the United States’ changed security environment since 9/11 and thus should be part of what DOD would request and account for as part of its base budget. Due to the enduring nature of GWOT, its cost implications should be part of the annual base budget debate, especially in light of the competing priorities for an increasingly strained federal budget. In order to improve the transparency and reliability of DOD’s reported obligations for GWOT by contingency operation, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to (1) ensure DOD components establish an auditable and documented methodology for determining what portion of GWOT costs is attributable to Operation Iraqi Freedom versus Operation Enduring Freedom when actual costs are not available, and (2) develop a plan and timetable for evaluating whether expenses not directly attributable to specific GWOT contingency operations are incremental costs and should continue to be funded outside of DOD’s base budget. In written comments on a draft of this report, DOD agreed with our first recommendation and partially agreed with our second recommendation. The department’s comments are discussed below and are reprinted in appendix II. DOD agreed with our recommendation that it ensure its components establish an auditable and documented methodology for determining what portion of GWOT costs is attributable to Operation Iraqi Freedom versus Operation Enduring Freedom when actual costs are not available. 
In its comments, DOD noted that it believes its components, for the most part, have established formal guidance to strengthen internal controls and capture all costs associated with Operation Iraqi Freedom and Operation Enduring Freedom from within their accounting systems. However, DOD noted that the DOD Financial Management Regulation does include guidance for DOD components to develop auditable methodologies, and when actual cost by operation is not available, its components are required to internally document the methodology used to develop a derived estimate of the cost. DOD stated that it intends to strengthen the guidance in its Financial Management Regulation to require an annual review of the methodologies used to allocate these costs. DOD believes this action will help promote reasonable cost allocations and consistent cost-of-war reporting throughout the department. DOD partially agreed with our second recommendation that it develop a plan and timetable for evaluating whether expenses not directly attributable to specific GWOT contingency operations are incremental costs and should continue to be funded outside of DOD’s base budget. DOD noted that it has been reporting contingency costs for several years and its objective is to include all incremental costs attributable to the war effort. DOD also stated that, as part of its continuing efforts to improve both budgeting and reporting of war costs, it collaborated with the Office of Management and Budget to refine the criteria used for determining where costs will be budgeted, either in the base or contingency budgets, and ultimately reported. DOD noted that it will use the refined criteria to inform the development of portions of the fiscal year 2009 Overseas Contingency Operations Supplemental Request and the full fiscal year 2010 Overseas Contingency Operations Request, which has not yet been submitted to Congress. As a result, we have not yet been able to evaluate DOD’s actions to assess whether they meet the intent of our recommendation, but will review these actions when the budget requests are finalized and submitted to Congress. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); and the Director, Office of Management and Budget. In addition, the report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To accomplish this review, we obtained and reviewed copies of the October 2007 through September 2008 monthly Department of Defense (DOD) Supplemental and Cost of War Execution Reports from the Office of the Undersecretary of Defense (Comptroller) to identify reported Global War on Terrorism (GWOT) obligations by contingency operation and appropriation account for the military services. We focused our review on the obligations reported for military personnel, operation and maintenance, and procurement, for the Army, Navy, Marine Corps and Air Force, both active and reserve forces, as these data represent the largest amount of GWOT costs. As we have previously reported, we have found the data in DOD’s Supplemental and Cost of War Execution Reports to be of questionable reliability. 
Consequently, we are unable to ensure that DOD’s reported obligations for GWOT are complete, reliable, and accurate, and they should therefore be considered approximations. In addition, DOD has acknowledged that systemic weaknesses with its financial management systems and business operations continue to impair its financial information. Despite the uncertainty about DOD’s obligation data, we are using this information because it is the only way to approach an estimate of the costs of the war. Also, despite the uncertainty surrounding the true dollar figure for obligations, these data are used to advise Congress on the cost of the war.

To assess DOD’s progress in improving the accuracy and reliability of its GWOT cost reporting, we analyzed GWOT obligation data in DOD’s monthly Supplemental and Cost of War Execution Reports as well as the military services’ individual accounting systems. These systems included the Army’s Standard Financial System, the Navy’s Standard Accounting and Reporting System, the Marine Corps’ Standard Accounting, Budgeting and Reporting System, and the Air Force’s Commanders Resource Information System. We analyzed GWOT obligation data from these accounting systems to better understand the military services’ GWOT cost-reporting procedures and how they used these data to report costs in DOD’s monthly Supplemental and Cost of War Execution Reports. We then obtained and reviewed guidance issued by DOD and the military services regarding data analysis and methods for reporting obligations for GWOT. We also interviewed key officials from the Office of the Under Secretary of Defense (Comptroller), the Defense Finance and Accounting Service, the Army, Navy, Marine Corps, and Air Force to obtain information about specific processes and procedures DOD and the military services have undertaken to improve the accuracy and reliability of reported GWOT cost information.

To assess DOD’s methodology for reporting GWOT costs by contingency operation, including the types of costs reported for those operations, we analyzed GWOT obligation data in DOD’s monthly Supplemental and Cost of War Execution Reports, including the source data for those reports in the military services’ individual accounting systems. As previously discussed, these systems included the Army’s Standard Financial System, the Navy’s Standard Accounting and Reporting System, the Marine Corps’ Standard Accounting, Budgeting and Reporting System, and the Air Force’s Commanders Resource Information System. We analyzed GWOT obligation data from these accounting systems to determine how the military services captured costs for specific contingency operations, including the types of costs they included as part of these contingency operations. We then obtained and reviewed guidance issued by DOD and the military services for identifying and reporting GWOT obligations by contingency operation. We also interviewed key officials from the Office of the Under Secretary of Defense (Comptroller), the Army, Navy, Marine Corps, and Air Force to determine how they interpreted and implemented this guidance. We performed our work at the following locations:
- Headquarters, Department of the Army, Washington, D.C.
- U.S. Army Installation Management Command
- Headquarters, Army Materiel Command, Ft. Belvoir, Virginia
- Headquarters, U.S. Army Forces Command, Ft. McPherson, Georgia
- U.S. Army Central Command, Ft. McPherson, Georgia
- U.S. Army Installation Management Command, Southeast Region, Ft.
- Department of the Navy, Headquarters, Washington, D.C.
- Commander, Navy Installations Command Headquarters, Washington, D.C.
- Office of the Under Secretary of Defense (Comptroller), Washington, D.C.
- Office of Management and Budget, Washington, D.C.

We performed our work from January 2008 through March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Ann Borseth, Assistant Director; Richard Geiger; Susan Ditto; Linda Keefer; Ron La Due Lake; Deanna Laufer; Lonnie McAllister; Eric Petersen; and Joseph Rutecki made key contributions to this report.
Since September 11, 2001, Congress has provided about $808 billion to the Department of Defense (DOD) for the Global War on Terrorism (GWOT) in addition to funding in DOD's base budget. Prior GAO reports have found DOD's reported GWOT cost data unreliable and found problems with transparency over certain costs. In response, DOD has made several changes to its cost-reporting procedures. Congress has shown interest in increasing the transparency of DOD's cost reporting and funding requests for GWOT. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO assessed (1) DOD's progress in improving the accuracy and reliability of its GWOT cost reporting, and (2) DOD's methodology for reporting GWOT costs by contingency operation. For this engagement, GAO analyzed GWOT cost data and applicable guidance, as well as DOD's corrective actions.

While DOD and the military services continue to take steps to improve the accuracy and reliability of some aspects of GWOT cost reporting, DOD lacks a sound approach for identifying costs of specific contingency operations, raising concerns about the reliability of reported information, especially on the cost of Operation Iraqi Freedom. Specifically, the department has undertaken initiatives such as requiring components to sample and validate their GWOT cost transactions and launching a new contingency cost-reporting system that will automate the collection of GWOT cost data from components' accounting systems and produce a new report comparing reported obligations and disbursements to GWOT appropriations data. Also, the military services have taken several steps to correct weaknesses in the reliability of their cost data. Limitations in DOD's approach to identifying the costs of Operation Iraqi Freedom and Operation Enduring Freedom may, in some cases, result in the overstatement of costs, and could lead to these costs being included in DOD's GWOT funding requests rather than the base budget. DOD guidance emphasizes the importance of accurately reporting the cost of contingency operations. However, while the Army and Marine Corps are capturing totals for procurement and certain operation and maintenance costs, they do not have a methodology for determining what portion of these GWOT costs is attributable to Operation Iraqi Freedom versus Operation Enduring Freedom and have reported all these costs as attributable to Operation Iraqi Freedom. In addition, the military services have reported some costs, such as those for Navy forward-presence missions, as part of Operation Iraqi Freedom or Operation Enduring Freedom, even though they are not directly attributable to either operation. In September 2005, DOD expanded the definition of incremental costs for large-scale contingencies, such as those for GWOT, to include expenses beyond direct incremental costs. This expanded definition provides no guidance on what costs beyond those attributable to the operation can be considered incremental and reported. Consequently, the military services have made their own interpretations as to whether and how to include costs not directly attributable to GWOT contingency operations. Without a methodology for determining what portion of GWOT costs is attributable to Operation Iraqi Freedom or Operation Enduring Freedom, reported costs for Operation Iraqi Freedom may be overstated.
Furthermore, unless DOD reconsiders whether expenses not directly attributable to specific GWOT operations should be included as incremental costs, the military services may continue to include these expenses as part of Operation Iraqi Freedom and Operation Enduring Freedom, reported costs for both operations may be overstated, and DOD may continue to request funding for these expenses in GWOT funding requests instead of including them as part of the base budget. Expenses beyond those directly attributable to either operation may be more reflective of the enduring nature of GWOT, and its cost implications should be part of the annual budget debate.
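The recommendation discussed above calls for an auditable, documented methodology for deriving each operation's share of GWOT costs when actual costs by operation are not available. The sketch below shows one hypothetical way such a derived estimate could be documented, prorating obligations that cannot be tied to a specific operation by each operation's share of directly attributable obligations. The approach and all dollar figures are illustrative assumptions, not DOD's actual method or data.

```python
# Hypothetical sketch of a documented derived-estimate allocation. The driver
# (each operation's share of directly attributable obligations) and the dollar
# figures are assumptions for illustration, not DOD's methodology or data.

def prorate_unattributed(direct_by_operation, unattributed_total):
    """Allocate obligations not tied to a specific operation in proportion to
    each operation's directly attributable obligations."""
    total_direct = sum(direct_by_operation.values())
    allocated = {}
    for operation, direct in direct_by_operation.items():
        share = direct / total_direct
        allocated[operation] = direct + unattributed_total * share
    return allocated

# Illustrative figures: $6.0 billion tied directly to Operation Iraqi Freedom,
# $1.5 billion to Operation Enduring Freedom, and $2.0 billion in procurement
# and operation and maintenance obligations not tied to either operation.
direct = {"Operation Iraqi Freedom": 6.0e9, "Operation Enduring Freedom": 1.5e9}
for operation, total in prorate_unattributed(direct, 2.0e9).items():
    print(f"{operation}: ${total / 1e9:.2f} billion")
```

Documenting the allocation driver and retaining the underlying figures is what would make such an estimate auditable; the proration basis itself could be any defensible measure, such as troop strength or flying hours.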
Various child nutrition programs have been established to provide nutritionally balanced, low-cost or free meals and snacks to children throughout the United States. The school lunch and school breakfast programs are among the largest of these programs. The National School Lunch Program was established in 1946; a 1998 expansion added snacks served in after-school and enrichment programs. In fiscal year 2000, more than 27 million children at over 97,000 public and nonprofit private schools and residential child care institutions received lunches through this program. The School Breakfast Program began as a pilot project in 1966 and was made permanent in 1975. The program had an average daily participation of more than 7.5 million children in about 74,000 public and private schools and residential child care institutions in fiscal year 2000. According to program regulations, states can designate schools as severe need schools if 40 percent or more of lunches are served free or at a reduced price, and if reimbursement rates do not cover the costs of the school’s breakfast program. Severe need schools were generally reimbursed 21 cents more for free and reduced-price breakfasts in school year 2000-01. The National School Lunch and School Breakfast Programs provide federally subsidized meals for all children, with the size of the subsidy dependent on the income level of participating households. Any child at a participating school may purchase a meal through the school meals programs. However, children from households with incomes at or below 130 percent of the federal poverty level are eligible for free meals, and those from households with incomes between 130 percent and 185 percent of the poverty level are eligible for reduced-price meals (this income test is sketched in the example that follows this overview). Similarly, children from households that participate in one of three federal programs—Food Stamps, Temporary Assistance for Needy Families, or the Food Distribution Program on Indian Reservations—are eligible to receive free or reduced-price meals. School districts participating in the programs receive cash assistance and commodity foods from USDA for all reimbursable meals they serve. Meals are required to meet specific nutrition standards. For example, school lunches must provide one-third of the recommended dietary allowances of protein, vitamins A and C, iron, calcium, and calories. Schools have a great deal of flexibility in deciding which menu planning approach will enable them to comply with these standards. Schools receive different cash reimbursement amounts depending on the category of meals served. For example, a free lunch receives a higher cash reimbursement amount than a reduced-price lunch, and a lunch for which a child pays full price receives the smallest reimbursement. (See table 2.) Children can be charged no more than 40 cents for reduced-price meals, but there are no restrictions on the prices that schools can charge for full-price meals.

Various agencies and entities at the federal, state, and local levels have administrative responsibilities under these programs. FNS administers the school meal programs at the federal level. In general, FNS headquarters staff carry out policy decisions, such as updating regulations, providing guidance and monitoring, and reporting program review results. Regional staff interact with state and school food authorities, and provide technical assistance and oversight.
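As a simple illustration of the income thresholds just described, the sketch below classifies a household by comparing its income to the federal poverty level; households participating in the three programs named above are also eligible, but the sketch covers only the income test. The poverty guideline figure in the example is a hypothetical placeholder, since actual guidelines vary by household size and year.

```python
# Minimal sketch of the income test described above. The 130 and 185 percent
# thresholds come from the text; the poverty guideline in the example is a
# hypothetical placeholder (actual guidelines vary by household size and year).

def meal_category(household_income: float, poverty_guideline: float) -> str:
    """Classify a household as eligible for free, reduced-price, or full-price meals."""
    ratio = household_income / poverty_guideline
    if ratio <= 1.30:
        return "free"            # at or below 130 percent of the poverty level
    if ratio <= 1.85:
        return "reduced-price"   # above 130 percent, up to 185 percent
    return "full-price"

# Hypothetical household: $30,000 income against a $20,000 guideline (150 percent).
print(meal_category(30_000, 20_000))  # -> reduced-price
```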
State agencies, usually departments of education, are responsible for the statewide administration of the program, including disbursing federal funds and monitoring the program. At the local level, two entities are involved—the individual school and organizations called school food authorities, which manage school food services for one or more schools. School food authorities have flexibility in how they carry out their administrative responsibilities and can decide whether to delegate some tasks to the schools. To receive program reimbursement, schools and school food authorities must follow federal guidelines for processing applications for free and reduced-price meals, verifying eligibility for free or reduced-price meals, and counting and reporting all reimbursable meals served, whether full-price, reduced-price, or free. This means processing an application for most participants in the free and reduced-price programs, verifying eligibility for at least a sample of approved applications, and keeping daily track of meals provided. These processes comprise only a small part of the federal school meal programs’ administrative requirements. According to a USDA report, school food authorities spend the majority of their time on other administrative processes, including maintaining daily meal production records and records documenting that the program is nonprofit, as required by regulations. The data we were asked to obtain focus on the participant eligibility and meal counting and reimbursement processes and do not include estimates for other administrative tasks, which are outside the scope of the request. The federal budget provides funds separate from program dollars to pay for administrative processes at the federal and state levels. In contrast, officials at the local level pay for administrative costs from program dollars that include federal and state funding and student meal payments. Districts and schools that participate in the school meal programs vary in terms of locale, size of enrollment, percent of children approved for free and reduced-price meals, and types of meal counting systems used. We selected 10 districts and 20 schools located in rural areas, small towns, mid-size central cities, urban fringe areas of mid-size and large cities, and large central cities. At the districts, enrollment ranged from 1,265 to 158,150 children, while at the 20 schools, it ranged from 291 to 2,661 children. The rate of children approved for free and reduced-price meals ranged from 16.7 to 74.5 percent at the districts and from 10.5 to 96.5 percent at the schools. Nine of these schools used electronic meal counting systems. Table 3 summarizes the characteristics of selected districts and schools.

For school year 2000-01, the estimated application process costs at the federal and state levels were much less than 1 cent per program dollar, and the median cost at the local level was 1 cent per program dollar. (See table 4.) At the federal and state levels, costs related to the application process were primarily for tasks associated with providing oversight, issuing guidance, and training throughout the year. At the local level, the costs varied, the tasks were primarily done at the beginning of the school year by the school food authorities, and different staff performed the tasks. Our limited number of selected schools differed in many aspects, making it difficult to determine reasons for most cost differences, except in a few instances.
The estimated federal costs for performing the duties associated with the application process were small in relation to the program dollars. FNS headquarters estimated its costs were about $358,000. When compared with the almost $8 billion in program dollars that FNS administered throughout the 2000-01 school year, these costs were much less than 1 cent per program dollar. However, these costs did not include costs for FNS’s seven regional offices. At the one region we reviewed, which administered about $881 million program dollars, estimated costs were about $72,000 for this time period. FNS’s costs were related to its overall program management and oversight duties. FNS officials said that they performed duties and tasks related to the application process throughout the year. The primary tasks and duties performed by FNS headquarters and/or regional staff included the following:
- Updating and implementing regulations related to the application process.
- Revising eligibility criteria.
- Reviewing state application materials and eligibility data.
- Providing training to states.
- Responding to questions from states.
- Conducting or assisting in reviews of the application process at the state and school food authority levels, and monitoring and reporting review results.

Estimated costs incurred by the five selected states ranged from $53,000 to $798,000 for performing tasks related to the application process, while the total program dollars administered ranged from $122 million to $1.1 billion. For four of the five states we reviewed, total application costs were generally in proportion to the program dollars administered. However, the estimated application costs for one state were higher than for other selected states with significantly larger programs. Officials from this state attributed these higher costs to the large number of districts in that state compared with most other states. At the state level, costs were incurred primarily for providing guidance and training to school food authority staff and for monitoring the process. Just as at the federal level, state level officials said that they performed their application process duties throughout the year. These tasks included updating agreements with school food authorities to operate school meal programs, preparing prototype application forms and letters of instruction to households and providing these documents to the school food authorities, and training managers from the school food authorities. State officials also reviewed the application process as part of required reviews performed at each school food authority every 5 years.

For the sites we reviewed, the estimated median cost at the local level to perform application process tasks was 1 cent per program dollar and ranged from less than half a cent to about 3 cents. The school food authorities incurred most of the application process costs—from about $3,000 to nearly $160,000, and administered program dollars ranging from about $315,000 to nearly $18 million. Not all schools incurred application process costs, but for those that did, these costs ranged from over $100 to as much as $3,735. The schools reviewed were responsible for $65,000 to $545,000 in program dollars. Table 5 lists the estimated application process costs, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review.
At the local level, the costs associated with conducting the application process for free and reduced-price meals were primarily related to the following tasks:
- Downloading the prototype application and household instruction letter from the state’s Web site and making copies of it before the school year begins.
- Sending the applications and household instruction letters home with children at the beginning of the school year or mailing them to the children’s homes.
- Collecting completed applications that were either returned to school or mailed to the district office.
- Reviewing applications and returning those with unclear or missing information, or calling applicants for the information.
- Making eligibility determinations for free or reduced-price meals.
- Sending letters to applicants with the results of eligibility determinations for free or reduced-price meals.
- Preparing rosters of eligible children.

Most of the application process tasks were performed at the beginning of the school year because parents must complete a new application each year in order for their children to receive free or reduced-price meals. Some applications are submitted throughout the school year for newly enrolled or transferred children or children whose families have changes to their household financial status. Program regulations direct parents to notify school officials when there is a decrease in household size or an increase in household income of more than $50 per month or $600 per year. Staff at 8 of the 10 school food authorities performed most of the application tasks for all schools that they managed. For the 2 other school food authorities, the schools reviewed performed most of the application tasks. Sixteen of 20 schools distributed and collected the applications. However, 4 schools did not distribute applications because their school food authorities mailed applications to households instead. Various staff supported the application process at the school food authorities and the schools. Two school food authorities hired temporary workers to help process the applications at the start of the school year, and the costs at these locations were below the median. Several schools involved various nonfood service staff in the process. At one school, guidance counselors and teachers helped distribute and collect applications. At another school, a bilingual community resource staff person made telephone calls to families to help them apply for free and reduced-price meals. Clerical workers copied and pre-approved applications at two schools, and at another school, the school secretary collected the applications and made eligibility determinations. While the variation in the staff assigned to perform application duties may account for some cost differences, the limited number of selected schools and their related school food authorities differed in many aspects, making it difficult to determine reasons for most cost differences, except in a few instances. In one case, we were able to compare two schools and their related school food authorities because the two schools had some similar characteristics, including size of school enrollment, grade span, and percentage of children approved for free and reduced-price school meals. However, the school food authorities differed in size and locale. At these two schools, the combined costs—costs for the school and its share of the related school food authority’s costs for processing applications—differed.
The combined costs at one school were almost 3 cents per program dollar, while the combined costs at the other school were less than 1 cent per program dollar. The school with the higher costs enlisted teachers and guidance counselors to help hand out and collect applications and was part of a smaller school food authority that used a manual process to prepare a roster of eligible children. The other school did not perform any application process tasks, since these tasks were done centrally at the school food authority. This school was part of a district that had a much higher enrollment and an electronic system to prepare a roster of eligible children. For the remaining 18 schools, we were generally not able to identify reasons for cost differences.

For the 2000-01 school year, the estimated costs per program dollar for the verification process were much less than 1 cent at the federal, state, and local levels. (See table 6.) At the federal and state levels, the costs of verifying eligibility for free and reduced-price meals were primarily related to oversight tasks performed throughout the year. At the local level, duties associated with the verification process were done in the fall of the school year. Only one school food authority significantly involved its schools in the verification process. At the 10 selected school food authorities, the verification process resulted in some children being moved to other meal categories, because households did not confirm the information on the application or did not respond to the request for verification documentation. FNS has implemented several pilot projects for improving the application and verification processes and plans to complete these projects in 2003. For school year 2000-01, the estimated costs at the federal and state levels for performing duties associated with the verification process were much less than 1 cent per program dollar. The estimated costs at FNS headquarters of about $301,000 and the estimated costs at the selected FNS region of about $28,000 were small in relation to the program dollars administered—about $8 billion and $881 million, respectively. FNS performed a number of tasks to support the verification process. FNS officials said that during the year the primary tasks that staff at headquarters and/or regions performed included the following:
- Updating regulations and guidance related to the verification process.
- Holding training sessions.
- Responding to questions from states and parents.
- Clarifying verification issues.
- Reviewing state verification materials and data.
- Conducting or assisting in reviews of the process at the state and school food authority levels.
- Monitoring and reporting review results.

Costs incurred by the selected states ranged from about $5,000 to $783,000 for performing tasks related to the verification process. During this period, these states administered $122 million to $1.1 billion program dollars. States incurred costs associated with overseeing and monitoring the verification process and performed many tasks throughout the year. The primary state task involved reviews of the verification process, where states determined whether the school food authorities appropriately selected and verified a sample of their approved free and reduced-price applications by the deadline, confirmed that the verification process was completed, and ensured that verification records were maintained. In addition to the review tasks, state officials provided guidance and training to school food authority staff.
The selected school food authorities’ costs ranged from $429 to $14,950 for the verification process tasks, while costs at selected schools, if any, ranged from $23 to as much as $967. Schools reported few, if any, costs because they had little or no involvement in the verification process. During school year 2000-01, the school food authorities administered program dollars ranging from about $315,000 to over $28 million, and the schools were responsible for program dollars ranging from about $65,000 to $545,000. The estimated median cost at the local level—school food authorities and schools combined—was much less than 1 cent per program dollar. Table 7 lists the estimated verification process costs, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review.

At the local level, costs associated with verifying approved applications for free and reduced-price school meals were for duties performed primarily in the fall of the school year. Each year school food authority staff must select a sample from the approved applications on file as of October 31 and complete the verification process by December 15. According to USDA regulations, the sample may be either a random sample or a focused sample. Additionally, the school food authority has an obligation to verify all questionable applications, referred to as verification “for cause.” However, any verification that is done for cause is in addition to the required sample. Furthermore, instead of verifying a sample of applications, school food authorities may choose to verify all approved applications. Also, school food authorities can require households to provide information to verify eligibility for free and reduced-price meals at the time of application. This information is to be used to verify applications only after eligibility has been determined based on the completed application alone. In this way, eligible children can receive free or reduced-price school meals without being delayed by the verification process. Of the 10 selected school food authorities, 7 used a random sample method and 3 used a focused sample method. At the local level, the costs associated with verifying approved applications for free and reduced-price meals were primarily related to the following tasks:
- Selecting a sample from the approved applications on file as of October 31.
- Providing the selected households with written notice that their applications have been selected for verification and that they are required to submit written evidence of eligibility within a specified period of time.
- Sending follow-up letters to households that do not respond.
- Comparing documentation provided by the household, such as pay stubs, with information on the application to determine whether the school food authority’s original eligibility determination is correct.
- Locating the files of all the siblings of a child whose eligibility status has changed if a school district uses individual applications instead of family applications.
- Notifying the households of any changes in eligibility status.

Generally, the selected school food authorities performed most of the verification tasks, while the schools had little or no involvement in the process.
However, the schools in one school food authority did most of the verification tasks, and the tasks performed by the school food authority were limited to selecting the applications to be verified and sending copies of parent notification letters and verification forms to the schools for the schools to distribute. The costs at these two schools were less than 1 cent per program dollar.

The verification process is intended to help ensure that only eligible children receive the benefit of free or reduced-price meals, and at the locations we visited, the verification process resulted in changes to the eligibility status for a number of children. During the verification process, generally, household income information on the application is compared with related documents, such as pay stubs or social security payment information. When the income information in the application cannot be confirmed or when households do not respond to the request for verification documentation, the eligibility status of children in the program is changed. That is, children are switched to other meal categories, such as from free to full price. Children can also be determined to be eligible for higher benefits, such as for free meals, rather than reduced-price meals.

At the locations we visited, the verification process resulted in changes to the eligibility status for a number of children. For example, at one school food authority in a small town with about half of its children approved for free and reduced-price school meals, 65 of 2,728 approved applications were selected for verification, and 24 children were moved from the free meals category to either the reduced-price or full-price meals categories, while 1 child was moved to the free category. At another school food authority in the urban fringe of a large city, with about 40 percent of its children approved for free and reduced-price school meals, 40 of about 1,100 approved applications were selected for verification and 8 children were moved to higher-priced meal categories. According to program officials, some children initially determined to be ineligible for free or reduced-price meals are later found to be eligible when they reapply and provide the needed documents. We did not determine whether any of the children were subsequently reinstated to their pre-verification status.

The accuracy of the numbers of children who are approved for free and reduced-price meals affects not only the school meals program but also other federal and state programs. A USDA report, based on the agency’s data on the number of children approved for free meals and data from the U.S. Bureau of the Census, indicates that about 27 percent more children are approved for free meals than are income-eligible. As such, the federal reimbursements for the school meals program may not be proper. Furthermore, some other programs that serve children in poverty distribute funds or resources based on the number of children approved to receive free or reduced-price meals. For example, in school year 1999-2000, nine states used free and reduced-price meals data to distribute Title I funds to their small districts (those serving areas with fewer than 20,000 total residents). In addition, districts typically use free and reduced-price meals data to distribute Title I funds among schools. At the state level, some state programs also rely on free and reduced-price lunch data. For example, Minnesota distributed about $7 million in 2002 for a first grade preparedness program based on these data.
As of July 2002, FNS had three pilot projects underway for improving the application and verification processes. These projects are designed to assess the value of (1) requesting income documentation and performing verification at the time of application, (2) verifying additional sampled applications if a specified rate of ineligible children is identified in the original verification sample, and (3) verifying the eligibility of children who were approved for free school meals based on information provided by program officials on household participation in the Food Stamp, Temporary Assistance for Needy Families, or Food Distribution on Indian Reservations programs, a process known as direct certification. FNS plans to report on these projects in 2003. According to officials from three organizations that track food and nutrition issues, the American School Food Service Association, the Center on Budget and Policy Priorities, and the Food Research and Action Center, requesting income documentation at the time of application would likely add to application process costs and may create a barrier for eligible households. Having to provide such additional information can complicate the school meals application process and may cause some eligible households not to apply. In 1986, we reported this method as an option for reducing participation of ineligible children in free and reduced-price meal programs, but recognized that it could increase schools’ administrative costs, place an administrative burden on some applicants, or pose a barrier to potential applicants.

For the 2000-01 school year, costs for meal counting and claiming reimbursement at the federal and state levels were much less than 1 cent per program dollar. The median cost was nearly 7 cents at the local level and was the highest of the three processes. (See table 8.) The federal and state level costs were incurred for providing oversight and administering funds for reimbursement throughout the school year. Similarly, costs at the local level were incurred throughout the school year because the related duties, which apply to all reimbursable meals, were performed regularly. A number of factors come into play at the local level that could affect costs; however, except in a few instances, we could not identify any clear pattern as to how these factors affected meal counting and reimbursement claiming costs. At the federal and state levels, the costs associated with the meal counting and reimbursement claiming processes were much less than 1 cent per program dollar. FNS headquarters estimated that the costs associated with its meal counting and reimbursement claiming tasks were $254,000, and the costs of one FNS region were estimated at $93,000 in school year 2000-01. In comparison, FNS administered $8 billion and the region administered $881 million in the school meals program. FNS’s costs for meal counting and claiming reimbursement were less than its costs for application processing and verification tasks. FNS’s meal counting and reimbursement costs were primarily incurred for providing technical assistance, guidance, monitoring, and distributing federal funds to state agencies that administer school food programs. FNS distributes these funds through the regional offices, with the regions overseeing state and local agencies and providing guidance and training. Prior to the beginning of the fiscal year, FNS reviewed meal reimbursement requests from the prior year to project funding needs for each state.
FNS awarded grants and provided letters of credit to states. Each month, states obtained reimbursement payments via the letters of credit, and FNS reviewed reports from states showing the claims submitted. At the end of the year, FNS closed out the grants and reconciled claims submitted with letter-of-credit payments. In addition to these tasks, FNS issued guidance, provided training, and responded to inquiries. Also, FNS regional staff conducted financial reviews of state agencies, such as reviews of reimbursement claim management, and assisted state agencies during reviews of school food authorities. For the five states, the cost per program dollar was also considerably less than 1 cent for the 2000-01 school year. The state agencies’ cost estimates ranged from $51,000 to $1 million, with the size of their programs ranging from $122 million to $1.1 billion. In all five states, the costs for meal counting and reimbursement tasks exceeded the costs for verification activities. In four of the five states, these costs were less than the costs for application activities. State agencies are responsible for operating a system to reimburse school food authorities for the meals served to children. Of the five state agencies in our sample, four had systems that allowed school food authorities to submit their monthly claims electronically, although one state agency’s system began operating in the middle of the 2000-01 school year. The other state agency received claims from its school food authorities through conventional mail services. The state agencies reviewed claims and approved payments as appropriate and conducted periodic reviews of school food authority meal counting and reimbursement activities.

The median cost for meal counting and reimbursement claiming at the local level—school food authorities and schools—was about 7 cents per program dollar and ranged from 2 cents to 14 cents. The estimated meal counting and reimbursement claiming costs at the 10 selected school food authorities ranged from $2,461 to $318,436, and ranged from $1,892 to $36,986 for the 20 schools. Schools usually had a higher share of the cost per program dollar than their respective school food authorities; 18 of the 20 schools reviewed incurred more than half the cost per program dollar, with 14 schools incurring more than 75 percent. For example, one school’s costs were $19,000—about 90 percent of the combined school and school food authority costs. Table 9 lists the estimated costs for meal counting and obtaining reimbursement, program dollars, and cost per program dollar for each of the school food authorities and schools included in our review. The local level costs were much higher than the costs for application processing and verification because the duties were performed frequently throughout the school year, and costs were incurred for all reimbursable meals served under the program. As such, these costs do not reflect separate costs by type of meal served. At the schools, each meal was counted when served, the number of meals served was tallied each day, and a summary of the meals served was sent periodically to the school food authority. The school food authorities received and reviewed reports from their schools at regular intervals, including ensuring that meal counts were within limits based on enrollment, attendance, and the number of children eligible for free, reduced-price, and paid lunches.
On the basis of these data, the school food authorities submitted claims for reimbursement to the state agency each month of the school year. Program officials noted that even without the federal requirement for meal counting by reimbursement category, schools would still incur some meal counting costs in order to account for the meals served. Most of the costs at the local level were for the labor to complete meal counting and claiming tasks. Those school food authorities with electronic meal counting systems reported substantial costs related to purchasing, maintaining, and operating meal counting computer systems and software. In addition to the frequency, another reason for the higher cost is that, unlike application and verification, meal counting and claiming reimbursement pertains to all reimbursable meals served—free, reduced-price, and full-price. For example, during school year 2000-01, FNS provided reimbursement for over 2 billion free lunches, about 400 million reduced-price lunches, and about 2 billion full-price lunches. Costs for meal counting and reimbursement claiming varied considerably at the local level—school food authorities and schools combined. The costs per program dollar ranged from 2 cents to 14 cents, compared with the costs per program dollar for the other processes, which were much more consistent—from about half a cent to 3 cents for the application process and from less than 1 cent to 1 cent for the verification process. Various factors may contribute to this range of costs at the local level. For example, larger enrollments may allow economies of scale that lower the cost of food service operations. Use of an electronic meal counting system, as opposed to a manual system, has the potential to affect meal counting costs, since electronic systems require the purchase of equipment, software, and ongoing maintenance. Food service procedures may also have a bearing on costs, such as the number and pay levels of cashiers and other staff performing meal counting and reimbursement claiming tasks. The interaction of these factors and our limited number of selected sites prevents a clear explanation for the differences in estimated costs per program dollar incurred at the selected locations reviewed, except in a few instances. For example, at the local level, the school with the highest combined meal counting cost per program dollar for the school and its share of the school food authority costs (14 cents) had an enrollment of 636 children, relatively few of its children approved for free and reduced-price meals (14 percent), and a manual meal counting system. The school with the lowest combined meal counting cost (2 cents per program dollar) had about twice the enrollment, 96 percent of its children approved for free and reduced-price meals, and an electronic meal counting system. Both schools were elementary schools in mid-size city locales. For the remaining 18 schools in the sample, we saw no distinct relationship between cost and these factors.

We provided a draft of this report to USDA’s Food and Nutrition Service for review and comment. We met with agency officials to discuss the report. These officials stated that written comments would not be provided. However, they provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please call me on (202) 512-7215. Key contacts and staff acknowledgments are listed in appendix III. This appendix discusses cost estimates for the application, verification, and meal counting and reimbursement claiming processes. The scope of our review included the National School Lunch Program and the School Breakfast Program as they relate to public schools. To the extent that we could, we excluded from our analyses other federal child nutrition programs and nonprofit private schools and residential child care institutions, which also participate in the school meals programs. Our review included the paper application process and did not include the direct certification of children for free and reduced-price school meals. Our focus was school year 2000-01, the most recent year for which data were available. The data we collected relate to that year. National data on the costs of the application, verification, and meal counting and reimbursement claiming processes are not available for the federal, state, or local levels, since these costs are not tracked separately. Therefore, we developed estimates of these costs on the basis of cost information provided by program managers and staff. To obtain data on the costs related to applying for free and reduced-price school meals, verifying approved applications, and counting meals and claiming reimbursements, we visited selected locations, including 5 state agencies, 10 school food authorities in public school districts, and 2 schools at each district. We chose sites that would provide a range of characteristics, such as geographical location, the size of student enrollment, the rate of children approved for free and reduced-price meals, and the type of meal counting system. We selected districts with schools that were located in rural areas, small towns, mid-size central cities, urban fringe areas of mid-size and large cities, and large central cities based on locale categories assigned to their respective districts by the National Center for Education Statistics. To include districts of various sizes in our study, we selected 2 districts in each selected state—1 with enrollment over 10,000 and 1 with enrollment under 10,000, except in Ohio. In Ohio, we selected 2 districts with enrollments of less than 5,000, since almost 90 percent of the public school districts nationwide have enrollments under that amount. We also selected districts with rates of children approved for free and reduced-price meals that ranged from 16.7 to 74.5 percent and schools with rates that ranged from 10.5 to 96.5 percent. We worked with state and school food authority officials at our selected districts to select a mix of schools that had either manual or electronic meal counting systems. Electronic meal counting systems were used at 9 selected schools. We also obtained information from officials at the Food and Nutrition Service’s (FNS) headquarters and one regional office. We selected one regional office that, according to FNS officials, had the best data available to develop estimates for the application, verification, and meal counting processes. We developed interview guides to use at selected sites. We also met with FNS and professional association officials to obtain their comments on these interview guides, and we revised them where appropriate. 
Using these guides, we interviewed program managers and staff at the selected locations to obtain information on tasks associated with the application, verification, and meal counting and reimbursement claiming processes for the 2000-01 school year. We obtained estimated labor and benefit costs associated with these tasks. We also obtained other estimated nonlabor costs such as those for translating, copying, printing, mailing, data processing, travel, hardware, software, and automated systems development. On the basis of this information, we calculated estimated costs associated with each process, that is, application, verification, and meal counting and reimbursement claiming. Using our cost estimates, we calculated costs relative to program dollars. Program dollars at the federal level for both FNS headquarters and the one region included the value of reimbursements for school meals and commodities, both entitlement and bonus, for public and nonprofit private schools and residential child care institutions because FNS was not able to provide program dollars specific to public schools. However, according to FNS officials, reimbursements and commodities provided to public schools make up the vast majority of these dollars. Program dollars at the state level included this federal funding specific to public school districts for school meals and state school meal funding. Information specific to public school districts is available at the state level because claims are made separately by each school food authority. At the local level, program dollars included the amounts children paid for the meals as well as federal and state funding. Since some school food authorities could not provide the dollar value of commodities used at selected schools, we assigned a dollar value of commodities to each of these schools based on their proportion of federal reimbursements. We included federal and state program funding and the amounts children paid for the meals because these are the revenues related to the sale of reimbursable meals. Because the definition of program dollars differed by level, we were unable to total the costs for the three levels—federal, state, and local. However, since the definition of program dollars was the same for school food authorities and schools, we were able to calculate the cost per program dollar at the local level for each school. To calculate these costs we:
(1) divided the school program dollars by the school food authority program dollars;
(2) multiplied the resulting amount by the total school food authority costs for each process—application, verification, and meal counting and reimbursement claiming—to determine the portion of the costs for each process at the school food authority that was attributable to each selected school;
(3) added these costs to the total costs for each of the schools; and
(4) divided the resulting total amount by the program dollars for each selected school to arrive at the cost per program dollar at the local level for each school (this calculation is illustrated in the sketch below).
We calculated a median cost per program dollar for school food authorities and schools separately for each process—application, verification, and meal counting and reimbursement claiming. We also calculated a median cost for each process for school food authorities and schools combined to arrive at local level medians for each process.
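As an illustration of steps (1) through (4), the sketch below combines a school's own process cost with its allocated share of the school food authority's cost and expresses the total per program dollar. All dollar amounts are hypothetical.

```python
# Minimal sketch of the local-level cost-per-program-dollar calculation
# described in steps (1) through (4) above. All dollar figures are hypothetical.

def local_cost_per_program_dollar(school_program_dollars, sfa_program_dollars,
                                  sfa_process_cost, school_process_cost):
    """Combine a school's own cost for a process with its share of the school
    food authority's (SFA) cost, then express the total per program dollar."""
    sfa_share = school_program_dollars / sfa_program_dollars      # step (1)
    allocated_sfa_cost = sfa_share * sfa_process_cost             # step (2)
    combined_cost = allocated_sfa_cost + school_process_cost      # step (3)
    return combined_cost / school_program_dollars                 # step (4)

# Hypothetical school: $300,000 of a $3,000,000 SFA program; the SFA spent
# $60,000 on application processing and the school itself spent $2,000.
rate = local_cost_per_program_dollar(300_000, 3_000_000, 60_000, 2_000)
print(f"about {rate * 100:.1f} cents per program dollar")  # -> about 2.7 cents
```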
The cost estimates do not include indirect costs. For 2 of the 10 school food authorities, indirect rates were not available, and in other cases the rates varied significantly due to differing financial management and accounting policies. Also, for 2 of the 10 school food authorities, including indirect rate calculations could have resulted in some costs being double counted because during our interviews with staff, they provided estimates for many of the tasks that would have been included in the indirect rates. Depreciation costs for equipment, such as computer hardware and software, were generally not calculated or maintained by states and school food authorities. Therefore, we obtained the costs for equipment purchased in the year under review. We did not obtain costs for equipment at the federal level because these costs could not be reasonably estimated, since equipment was used for purposes beyond the processes under review. We obtained information on the verification pilot projects from FNS officials. We also obtained information from the American School Food Service Association, the Center on Budget and Policy Priorities, and the Food Research and Action Center on several options related to the program, one of which was the same as one of the pilot projects. We did not verify the information collected for this study. However, we made follow-up calls in cases where data were missing or appeared unusual. The results of our study cannot be generalized to schools, school food authorities, or states nationwide. Program dollars include cash reimbursements and commodities (bonus and entitlement) at the federal level, the amounts provided to school food authorities for these programs at the state level, and the amounts students paid for their meals at the local level. In addition to the individuals named above, Peter M. Bramble, Robert Miller, Sheila Nicholson, Thomas E. Slomba, Luann Moy, and Stanley G. Stenersen made key contributions to this report.
Each school day, millions of children receive meals and snacks provided through the National School Lunch and National School Breakfast Programs. Any child at a participating school may purchase a meal through these school meal programs, and children from households that apply and meet established income guidelines can receive these meals free or at a reduced price. The federal government reimburses the states, which in turn reimburse school food authorities for each meal served. During fiscal year 2001, the federal government spent $8 billion in reimbursements for school meals. The Department of Agriculture's Food and Nutrition Service, state agencies, and school food authorities all play a role in these school meal programs. GAO reported that costs for the application, verification, and meal counting and reimbursement processes for the school meal programs were incurred mainly at the local level. Estimated federal and state-level costs during school year 2000-2001 for these three processes were generally much less than 1 cent per program dollar administered. At the local level—selected schools and the related school food authorities—the median estimated cost for these processes was 8 cents per program dollar and ranged from 3 cents to 16 cents per program dollar. The largest costs at the local level were for counting meals and submitting claims for reimbursement. Estimated costs related to the application process were the next largest, and estimated verification process costs were the lowest of the three.
TSA inspects airports, air carriers, and other regulated entities to ensure that they are in compliance with federal aviation security regulations, TSA-approved airport security programs, and other requirements, including requirements related to controlling airport employee access to secure areas of an airport. Airport operators have direct responsibility for implementing security requirements in accordance with their TSA-approved airport security programs. In general, secure areas of an airport are specified in the airport operator’s security programs and include the sterile area, which is the area of an airport that provides passengers access to boarding aircraft and to which access is generally controlled by TSA or a private screening entity under TSA oversight, and the security identification display area (SIDA), which is a portion of an airport in which security measures are carried out and where appropriate identification must be worn by aviation workers. For example, aviation workers who require access to the aircraft movement and parking areas for the purposes of their employment duties must display appropriate identification to access these areas. Airport operators are to perform background checks on individuals prior to granting them unescorted access to secure areas of an airport, and TSA relies on airport operators to collect and verify applicant data, such as name, place of birth, and country of citizenship, for individuals seeking credentials. Background checks for individuals applying for credentials to allow unescorted access to secure areas of commercial airports include (1) a security threat assessment from TSA, including a terrorism check; (2) a fingerprint-based criminal history records check; and (3) evidence that the applicant is authorized to work in the United States. The criminal history records check also determines whether the applicant has committed a disqualifying criminal offense in the previous 10 years. TSA and airport operators have oversight responsibilities for the identification badges that are issued. For example, airport operators must account for all badges through control procedures, such as audits, specified in TSA’s security directives and in an airport’s security program. TSA assesses airports’ compliance with its security directives and federal regulations by inspecting, in alternating years, the airport operator’s documents related to issuing and controlling identification badges, among other things, and by randomly screening aviation workers. The Transportation Security Administration (TSA) has generally made progress addressing the 69 applicable requirements within the Aviation Security Act of 2016 (2016 ASA). As of June 2017, TSA officials stated the agency had implemented 48 of the requirements; it plans no further action on these. For 18 requirements, TSA took initial actions and plans further action. TSA officials stated they have yet to take action on 2 requirements and plan to address them in the near future. TSA took no action on 1 requirement regarding access control rules because it plans to address this through mechanisms other than formal rulemaking, such as drafting a national amendment to airport operator security programs. Appendix I presents the details of each requirement, the progress made by TSA, and the status of TSA’s plans for further actions. A summary of TSA’s progress in implementing the requirements in each section of the Act is presented below.
Conduct a Threat Assessment (Section 3402)
TSA made progress on the 11 requirements in Section 3402 of the 2016 ASA. TSA plans no further action for 9 requirements and plans further action for 2 requirements, as shown in appendix I. For example, section 3402(a) requires TSA to conduct a threat assessment that considers the seven factors stated in the law, and section 3402(b) requires TSA to submit a report to the appropriate congressional committees on the results of the assessment. Consistent with these sections, TSA conducted a threat assessment on the level of risk individuals with unescorted access to the secure area of an airport pose to the domestic air transportation system and submitted a report on it to the appropriate congressional committees in May 2017. In conducting the threat assessment, TSA also considered all seven required factors. For example, TSA considered recent security breaches at domestic and foreign airports by analyzing access control-related incidents from December 2013 through February 2017. TSA also considered the vulnerabilities associated with unescorted access authority granted to foreign airport operators and air carriers, and their workers, by reviewing the vulnerability of incoming flights to the United States for four international regions. The threat assessment noted several recommendations under consideration, such as enhancing relationships with the FBI and U.S. Drug Enforcement Administration, among other law enforcement entities, to ensure TSA is more fully aware of insider threats within the domestic transportation system. TSA officials stated they plan to use the threat assessment to, among other things, expand the use of vulnerability assessments and insider threat-related inspections at all commercial airports. Thus, TSA plans no further action for the 9 requirements related to the threat assessment.
Enhance Oversight Activities (Section 3403)
TSA generally made progress in addressing the 10 requirements in Section 3403 of the 2016 ASA. Of the 10 requirements, TSA plans no further action for 4 requirements. TSA plans further action for 5 requirements, but has yet to begin implementing 1 of these 5 requirements. In addition, TSA took no action on 1 requirement because it plans to address this requirement through other means, as shown in appendix I. Section 3403(a) requires TSA to update rules on access controls and, as part of this update, to consider, among other things, best practices for airport operators that report more than three percent of credentials for unescorted access to the SIDA of any airport as missing. In accordance with this requirement, TSA developed a list of measures for airport operators to perform—such as rebadging an airport if the percentage of unaccounted-for badges exceeds a certain threshold—and published them on DHS’s Homeland Security Information Network (HSIN) for airport operators to access. In addition, TSA officials stated they developed a fine structure for airport operators whose missing SIDA credentials exceed five percent (for non-category X airports) or three percent (for category X airports). TSA plans to take additional action to address this and other requirements related to updating the rules on access controls. For example, it plans to propose a national amendment to airport operator security programs for airport operators to report to TSA when an airport exceeds a specified threshold for unaccounted identification badges.
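The category-specific thresholds in the fine structure described above can be expressed as a simple check. The sketch below is a hypothetical illustration only; the function name, airport categories, and badge counts are invented, and the three and five percent figures are taken from TSA officials' description above.

```python
# Thresholds drawn from the fine structure TSA officials described above;
# everything else in this sketch is illustrative, not TSA's actual system.
MISSING_BADGE_THRESHOLDS = {
    "category_x": 0.03,   # more than 3 percent triggers action at category X airports
    "other":      0.05,   # more than 5 percent triggers action at non-category X airports
}

def exceeds_missing_badge_threshold(category: str,
                                    badges_issued: int,
                                    badges_unaccounted: int) -> bool:
    """Return True if the share of unaccounted-for SIDA credentials exceeds
    the category-specific threshold (hypothetical sketch)."""
    key = "category_x" if category == "category_x" else "other"
    return badges_unaccounted / badges_issued > MISSING_BADGE_THRESHOLDS[key]

# A category X airport with 10,000 badges and 350 unaccounted for exceeds its 3 percent threshold.
print(exceeds_missing_badge_threshold("category_x", 10_000, 350))  # True
```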
TSA plans no further action under section 3403(a)(2)(F) to consider a method of termination by the employer of any airport worker who fails to report missing credentials for unescorted access to any SIDA of an airport in a timely manner. TSA officials stated they considered developing such a method; however, they plan no further action because TSA does not have authority over employment determinations made by airport operators or other employers. Further, section 3403(b) provided that TSA may encourage airports and aircraft operators to issue free, one-time, 24-hour temporary credentials to aviation workers who report their credentials as missing but not permanently lost. Officials stated they plan no further action on this requirement because temporary credentials conflict with a current federal regulation that requires airport operators to ensure that only one identification badge is issued to an individual at one time. TSA has yet to take action on one requirement and took no action on another requirement in section 3403(a) of the 2016 ASA. First, TSA officials stated they plan to consider, under section 3403(a)(2)(E), increasing fines and direct enforcement actions for airport workers and their employers who fail to report missing credentials in a timely manner, but they have yet to do so. In addition, TSA took no action to update the rules on access controls. TSA officials stated that they are taking other actions, such as drafting a proposed national amendment to airport security programs, to address this requirement.
Update Airport Employee Credential Guidance (Section 3404)
TSA generally made progress in addressing the 4 requirements in Section 3404 of the 2016 ASA. Of the 4 requirements, TSA took action and plans no further action for 2 requirements, plans further action for 1 requirement, and has yet to take action on 1 requirement. For example, section 3404(a) requires TSA to issue guidance to airport operators regarding the placement of an expiration date on airport identification badges issued to non-U.S. citizens that is not longer than the period of time during which such non-U.S. citizens are lawfully authorized to work in the United States. In accordance with this requirement, TSA issued guidance stating that airport operators should match an identification badge’s expiration date to an individual’s immigration status, published this guidance to airport operators in fiscal year 2016 on the HSIN, and plans to issue a security directive to further address this requirement. TSA has no plans for further action to address section 3404(b)(1), which requires TSA to issue guidance for its inspectors to annually review the procedures of airport operators and carriers for individuals seeking unescorted access to the SIDA and to make information on identifying suspicious or fraudulent identification materials available to airport operators and air carriers. For example, TSA officials stated that Transportation Security Inspector guidance is updated yearly to incorporate additional inspection guidelines, as is TSA’s Compliance Manual, which includes updated methods for inspections and additional airport access control measures to be tested. The update for fiscal year 2017 changed the number of required tests related to insider threats and includes new inspection techniques related to individuals seeking unescorted access to the SIDA. Additionally, officials stated that TSA made information available on the HSIN on identifying fraudulent documentation.
TSA officials have yet to take action on section 3404(b)(2), which requires that the guidance to airport operators regarding the placement of an expiration date on airport identification badges issued to non-U.S. citizens include a comprehensive review of background checks and employment authorization documents issued by the United States Citizenship and Immigration Services. Officials stated that TSA plans to request clarification from the appropriate congressional committees to determine the actions needed to implement this requirement.
Vet Airport Employees (Section 3405)
TSA made progress on the 12 requirements in Section 3405 of the 2016 ASA. TSA plans no further action for 5 requirements and plans further action for 7 requirements, as shown in appendix I. For example, section 3405(a) requires TSA to revise certain regulations related to the eligibility requirements and disqualifying criminal offenses for individuals seeking unescorted access to any SIDA of an airport. In accordance with this requirement, TSA is drafting a Notice of Proposed Rulemaking to update rules related to vetting of employees seeking unescorted access to the SIDA of an airport; however, TSA officials reported two challenges in implementation. First, TSA officials stated they cannot update the employee eligibility requirements and disqualifying criminal offense regulations within the 180 days specified in the statute because the process for promulgating regulations generally takes longer than 180 days. Second, per Executive Order 13771, federal agencies must identify two existing regulations to be repealed for every new regulation issued during fiscal year 2017, and the order further provides that, for each new regulation, the head of the agency is required to identify offsetting regulations and provide the agency’s best approximation of the total costs or savings associated with each new or repealed regulation. Despite these challenges, TSA officials stated they plan further actions to update rules related to employee vetting in accordance with this section; however, officials could not provide a time frame for completing this requirement. In addition, TSA officials stated they plan no further action with respect to section 3405(b)(1), which requires TSA and the FBI to implement the Rap Back Service for recurrent vetting of aviation workers. In response to this requirement, TSA coordinated with the FBI to implement the FBI’s Rap Back Service, which uses the FBI fingerprint-based criminal records repository to provide recurrent fingerprint-based criminal history record checks for aviation workers who have been initially vetted and already received airport-issued identification badge credentials. TSA officials stated the Rap Back program is available to all commercial airport operators; however, to participate in the program, an airport operator must, among other things, sign a memorandum of understanding with TSA that documents its participation. As of June 2017, TSA had executed over 100 memoranda of understanding with airport operators, including 17 category X airports, and plans to enroll additional airports in fiscal year 2017.
Develop and Implement Access Control Metrics (Section 3406)
TSA made progress by taking action on the 6 requirements in Section 3406 of the 2016 ASA. TSA officials stated they plan no further action on any of the 6 requirements in this section, as shown in appendix I.
For example, section 3406 requires TSA to develop and implement performance metrics to measure the effectiveness of security for the SIDAs of airports and, in developing these metrics, TSA may consider 5 factors stated in the Act. In accordance with this requirement, TSA developed and implemented a metric that determines the percentage of TSA SIDA inspections that were found to be in compliance with the airport security program. TSA officials stated they plan to use the metric to inform decision makers on SIDA compliance for individual airports and nationwide. For example, if TSA determines an individual airport has a low compliance rate, TSA leadership may conduct additional special emphasis inspections to address the issue, according to TSA officials.
Develop a Tool for Unescorted Access Security (Section 3407)
TSA made progress on the 18 requirements in Section 3407 of the 2016 ASA. Of the 18 requirements, TSA plans no further action on 17 requirements and plans further action for 1 requirement, as shown in appendix I. For example, section 3407(a) requires TSA to develop a model and best practices for unescorted access security that includes 5 requirements as stated in the Act. In accordance with this requirement, TSA officials stated they utilized a tool for unescorted access security called the Advanced Threat Local Allocation Strategy (ATLAS) tool, which was developed in 2015 and is designed to randomly screen aviation workers who have unescorted access to restricted areas of an airport. The tool incorporates the required elements listed in section 3407(a), such as the use of intelligence, scientific algorithms, and other risk-based factors, according to TSA officials. For example, TSA officials stated the algorithm in the tool provides a scientific way to randomize the locations, times, and types of screening an aviation worker might receive. It allows TSA to limit an individual’s ability to circumvent screening by deploying resources in a way that an individual who enters an access point will not know whether, or what type of, screening will take place, according to officials. While officials stated they plan no further actions to implement the requirements in section 3407(a) to develop a model, they conducted pilot assessments of the ATLAS tool at three airports in fiscal year 2015 and at one airport in fiscal year 2016, and they plan to pilot the tool at additional airports before expanding its use in phases to all airports by fiscal year 2018.
Increase Covert Testing (Section 3408)
TSA made progress on the 2 requirements in Section 3408 of the 2016 ASA. Of the 2 requirements, TSA plans no further action for 1 requirement and plans to take further action for 1 requirement, as shown in appendix I. For example, TSA plans further actions to increase the use of covert testing in fiscal year 2017 in accordance with section 3408(a), which requires TSA to increase the use of red-team, covert testing of access controls to any secure areas. Specifically, TSA conducted one access control covert project in fiscal year 2016 and plans to increase the number of projects to three in fiscal year 2017. Additionally, TSA submitted a report on access control covert testing to the appropriate congressional committees as required by section 3408(c)(1) of the 2016 ASA, describing the steps TSA plans to take to expand the use of access control covert testing, and TSA plans no further action to address this reporting requirement.
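The randomized-screening idea that TSA officials describe above for the ATLAS tool under section 3407 can be illustrated with a toy sketch. The access points, time windows, and screening types below are invented for illustration, and the sketch makes no claim about how TSA's actual algorithm weighs intelligence or other risk-based factors.

```python
import random

# All values below are invented for illustration; they are not TSA's actual
# access points, schedules, or screening categories.
ACCESS_POINTS   = ["gate A", "gate B", "cargo door 3", "ramp entrance 7"]
TIME_WINDOWS    = ["0400-0800", "0800-1200", "1200-1600", "1600-2000", "2000-2400"]
SCREENING_TYPES = ["pat-down", "bag search", "metal detector", "explosives trace"]

def daily_screening_plan(seed: int, checkpoints_per_day: int = 3):
    """Randomly pick where, when, and how screening occurs so that a worker
    entering an access point cannot predict whether or how they will be screened."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    return [
        {
            "access_point":   rng.choice(ACCESS_POINTS),
            "time_window":    rng.choice(TIME_WINDOWS),
            "screening_type": rng.choice(SCREENING_TYPES),
        }
        for _ in range(checkpoints_per_day)
    ]

for checkpoint in daily_screening_plan(seed=20170601):
    print(checkpoint)
```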
Review Security Directives (Section 3409)
TSA made progress on the 6 requirements in Section 3409 of the 2016 ASA. Of the 6 requirements, TSA plans no further action on 4 requirements and plans further action on 2 requirements, as shown in appendix I. Section 3409(a) requires TSA to conduct a comprehensive review of every current security directive addressed to any regulated entity. Section 3409(b) requires TSA to submit notice to the appropriate congressional committees for each new security directive TSA issues. TSA officials stated they have a process in place to review current security directives. For example, officials stated that they review all current security directives on at least an annual basis, through working groups of TSA and industry association officials. TSA stated these working groups consider, among other things, security directives within airport security programs and the need for revocation or revision of current security directives. TSA plans no further action to address this requirement. With respect to the issuance of new security directives, TSA officials stated they provide briefings for relevant congressional committees as requested regarding the issuance of a new security directive and the rationale for issuing it. According to officials, further action is planned to address these requirements for new security directives. While TSA officials stated that it is too early to measure the effectiveness of the applicable requirements of the 2016 ASA, they stated that implementing these requirements would broadly improve aviation security, and they identified two requirements that, when implemented, may specifically reduce access control vulnerabilities. First, in accordance with section 3404(a) of the Act, TSA plans to issue a security directive to require airport operators to match the expiration date of an identification badge of an aviation worker who possesses a temporary immigration status with the individual’s U.S. work authorization expiration date. TSA officials stated that this measure may help prevent workers who are no longer authorized to work in the United States from inappropriately gaining access to airport SIDAs because an expired identification badge will prevent entry into the SIDA. Second, in accordance with section 3405(b) of the Act, TSA coordinated with the FBI to implement the Rap Back Service for airport operators to recurrently vet aviation workers in October 2016. The Rap Back Service uses the FBI fingerprint-based criminal records repository to provide recurrent fingerprint-based criminal history record checks for aviation workers who have been initially vetted and already received airport-issued identification badge credentials. As of June 2017, TSA had executed memoranda of understanding with 105 airport operators, including 17 category X airports, and 1 airline to complete the Rap Back enrollment process. TSA officials stated that implementing the requirement to recurrently vet aviation workers may also reduce vulnerabilities associated with the insider threat. For example, they stated that continuous vetting would increase the potential for TSA and airport operators to be aware of aviation workers who had engaged in potentially disqualifying criminal activity yet continued to hold active identification badges granting access to airport SIDAs. We provided a draft of this product to the Secretary of Homeland Security for comment.
In its formal comments, which are reproduced in full in Appendix II, DHS stated that TSA continues to implement the 2016 ASA requirements. TSA provided technical comments on a draft of this report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in Appendix III. Subtitle D of the Aviation Security Act of 2016 (ASA), enacted on July 15, 2016, requires the Transportation Security Administration (TSA) to take actions in eight categories that have a total of 69 applicable requirements. The Transportation Security Administration (TSA) has generally made progress addressing the 69 applicable requirements within the 2016 ASA. As of June 2017, TSA officials stated the agency had implemented 48 of the requirements and plans no further action on these. For 18 requirements, TSA took initial actions and plans further action. TSA officials stated they have yet to take action on 2 requirements and plan to address them in the near future. TSA took no action on 1 requirement regarding access control rules because it plans to address this through mechanisms other than formal rulemaking, such as drafting a national amendment to airport operator security programs. Because many of TSA’s actions taken in response to the 2016 ASA were recently implemented or are still ongoing and not fully implemented, we did not assess the effectiveness of the actions taken by TSA. The tables below present the details of each requirement, the progress made by TSA, and the status of TSA’s plans for further action. Section 3402 of the 2016 ASA requires TSA to, among other things, conduct a threat assessment and submit a report regarding the threat assessment to the appropriate congressional committees. TSA made progress in implementing these requirements and has plans for further action, as shown in table 1. Section 3403 of the 2016 ASA requires TSA to take actions related to enhancing its oversight of aviation workers. TSA made progress in implementing the requirements and has further actions planned, as shown in table 2. Section 3404 of the 2016 ASA requires TSA to take actions related to updating employee credential guidance. TSA made progress in implementing the requirements in this section and plans further actions, as shown in table 3. Section 3405 of the 2016 ASA requires TSA to take action on requirements related to vetting aviation workers. TSA made progress in implementing these requirements and plans further action on certain requirements, as shown in table 4. Section 3406 of the 2016 ASA requires TSA to, among other things, develop and implement performance metrics to measure the effectiveness of security for Security Identification Display Areas of airports. TSA made progress in implementing these requirements and plans no further action on any of them, as shown in table 5. Section 3407 of the 2016 ASA requires TSA to, among other things, develop a model and best practices for unescorted access security.
TSA made progress in implementing these requirements and plans further action on certain requirements, as shown in table 6. Section 3408 of the 2016 ASA requires TSA to, among other things, increase the use of red-team covert testing of access controls to any secure areas of an airport. TSA made progress in implementing these requirements and plans further action on certain requirements, as shown in table 7. Section 3409 of the 2016 ASA requires TSA to, among other things, review all security directives to determine if they remain relevant, and report to Congress on security directives. TSA made progress in implementing these requirements and plans further action on certain requirements, as shown in table 8. In addition to the contact named above, Kevin Heinz (Assistant Director), Brandon Jones (Analyst-in-Charge), Michele Fejfar, Tyler Kent, Thomas Lombardi, Heidi Nielson, and Claire Peachey made significant contributions to the report.
Recent incidents involving aviation workers conducting criminal activity in the nation's commercial airports have led to interest in the measures TSA and airport operators use to control access to secure areas of airports. The 2016 ASA required TSA to take several actions related to oversight of access control security at airports. The Act also contains a provision for GAO to report on progress made by TSA. This report examines, among other issues, progress TSA has made in addressing the applicable requirements of the 2016 ASA. GAO compared information obtained from TSA policies, reports, and interviews with TSA officials to the requirements in the 2016 ASA. GAO also visited three airports to observe their use of access controls and interviewed TSA personnel. The non-generalizable group of airports was selected to reflect different types of access control measures and airport categories. GAO is not making any recommendations. In its formal response, DHS stated that it continues to implement the 2016 ASA requirements. The Transportation Security Administration (TSA) has generally made progress addressing the 69 applicable requirements within the Aviation Security Act of 2016 (2016 ASA). As of June 2017, TSA had implemented 48 of the requirements; it plans no further action on these. For 18 requirements, TSA took initial actions and plans further action. TSA officials stated they have yet to take action on 2 requirements and plan to address them in the near future. TSA took no action on 1 requirement regarding access control rules because it plans to address this through mechanisms other than formal rulemaking, such as drafting a national amendment to airport operator security programs. Key examples of TSA's progress in implementing the requirements in the eight relevant sections of the Act are shown below:
Conduct a Threat Assessment: TSA conducted a threat assessment that analyzed vulnerabilities related to the insider threat—that is, the threat posed by aviation workers who exploit their access privileges to secure areas of an airport for personal gain or to inflict damage.
Enhance Oversight Activities: Among other things, TSA developed a list of measures for airport operators to perform, such as rebadging an airport if the percentage of badges unaccounted for exceeds a certain threshold.
Update Airport Employee Credential Guidance: TSA issued guidance to airport operators to match the expiration date of a non-U.S. citizen aviation worker's identification badge to the individual's U.S. work authorization status.
Vet Airport Employees: In addition to making progress on updating employee vetting rules, TSA coordinated with the Federal Bureau of Investigation (FBI) to implement the FBI's Rap Back Service for providing recurrent fingerprint-based criminal history record checks for aviation workers.
Develop and Implement Access Control Metrics: TSA developed and implemented a metric that determines the percentage of TSA secure area inspections found to be in compliance with the airport security program.
Develop a Tool for Unescorted Access Security: According to TSA officials, the agency developed a tool designed to ensure that aviation workers with unescorted access are randomly screened for prohibited items, such as firearms and explosives, and to check for proper identification.
Increase Covert Testing: TSA plans to increase the number of covert tests of access controls it will perform in 2017.
Review Security Directives: Security directives are issued by TSA when, for example, additional measures are required to respond to a threat. TSA officials stated they review all security directives annually to consider the need for revocation or revision, and brief Congress when new directives are to be issued.
Each year, HUD helps hundreds of thousands of Americans finance home purchases by insuring their mortgage loans. HUD insures private lenders against losses on mortgages for single-family homes—which HUD defines as structures with one to four dwelling units—and plays a particularly large role in certain market segments, including low-income borrowers and first-time homebuyers. The loan amount that HUD can insure is based, in part, on the appraised value of the home. The primary role of appraisals in the loan underwriting process is to provide evidence that the collateral value of the property is sufficient to avoid losses on loans if the borrower is unable to repay the loan. If a borrower defaults and the lender subsequently forecloses on the loan, the lender can file an insurance claim with HUD for nearly all of its losses, including the unpaid balance of the loan. After the claim is paid, the lender transfers the title of the home to HUD, which is responsible for managing and selling the property. Most of the mortgages are insured by FHA under its Mutual Mortgage Insurance Fund. To cover claims for lenders’ losses, FHA deposits insurance premiums paid by borrowers into the fund, which, historically, has been self-sufficient. Figure 1 shows the role appraisals play as part of the home-buying process. As figure 1 indicates, the purpose of a HUD appraisal is to (1) determine the property’s eligibility for mortgage insurance on the basis of its condition and location and (2) estimate the value of the property for mortgage insurance purposes. In performing these tasks, the appraiser is required to identify any readily observable deficiencies impairing the safety, sanitation, structural soundness, and continued marketability of the property and to assess the property’s compliance with other minimum standards and requirements. HUD maintains a roster of appraisers who have satisfied the requirements to be certified to perform HUD appraisals. Lenders underwriting mortgages to be insured by HUD must select one of the approximately 26,000 appraisers listed on the appraiser roster to prepare an appraisal of the mortgaged property. In fiscal year 2003, appraisers listed on HUD’s roster performed 902,118 appraisals for the purposes of HUD mortgage insurance. HUD’s oversight of appraisers who appraise properties with mortgages insured by HUD is the responsibility of the Processing and Underwriting Divisions at the four homeownership centers (HOCs). The HOCs are located in Atlanta, Georgia; Denver, Colorado; Philadelphia, Pennsylvania; and Santa Ana, California. Figure 2 below shows the distribution of appraisers throughout the HOCs, each of which is responsible for a multistate region. The HOCs report directly to HUD’s Office of Single-Family Housing, which is responsible for implementing HUD’s home mortgage insurance programs and maintaining the appraiser roster. Since the creation of the HOCs in 1998, their role with respect to HUD’s appraiser monitoring strategy has evolved. From fiscal year 1998 through 2000, HUD instructed the HOCs to perform random field reviews for at least 10 percent of all loans insured by HUD. Starting in fiscal year 2000, HUD’s Real Estate Assessment Center assumed the responsibility of performing these field reviews for the HOCs and used an automated system—the Single Family Appraiser Subsystem—to review the quality of appraisals and identify those that were poorly prepared. 
However, HUD discovered that the majority of the appraisals identified as poor simply had documentation errors, and the system had failed to identify poorly performing appraisers who contributed to losses to the FHA mortgage insurance fund. In April 1999, we reported on HUD’s appraiser approval, monitoring, and enforcement efforts. We noted that HUD had limited assurance that the appraisers on its roster were knowledgeable about its appraisal requirements. We also reported that HUD was not doing a good job of monitoring the performance of appraisers and that HUD staff did not routinely visit appraised properties to determine the accuracy of field review contractors’ observations. In addition, we observed that HUD was not holding appraisers accountable for the quality of their appraisals and that HUD had not aggressively enforced its policy to hold lenders equally accountable with the appraisers they select for the accuracy and thoroughness of appraisals. HUD issued guidance and regulations in order to help ensure that appraisers it approves to perform appraisals under its Single-Family Mortgage Insurance programs are qualified to be placed on the appraiser roster. In 1999, the department issued guidance that required appraisers to, among other things, pass an examination on HUD appraisal methods and reporting. In 2003, HUD also issued regulations making changes to the licensing and certification requirements for the appraiser roster. Although HUD has strengthened its criteria for approving appraisers to perform appraisals, quality control over the approval process is limited. In November 1999, HUD issued new guidance under its Homebuyer Protection Plan—which was implemented in an attempt to increase the accuracy and thoroughness of HUD appraisals performed as part of the home-buying process—for placement and retention on the appraiser roster. As noted previously, lenders underwriting HUD loans must select appraisers from those listed on the roster to perform appraisals in connection with FHA-insured mortgages. Before 1999, HUD relied largely on the states’ licensing processes to ensure that appraisers were qualified to perform appraisals. However, the states’ minimum licensing standards did not include proficiency in HUD appraisal requirements. According to HUD’s new guidance, in order to be eligible for placement on the roster, an appraiser must (1) pass an examination on HUD appraisal methods and reporting; (2) be state licensed or state certified, with credentials based on the minimum criteria issued by the Appraiser Qualifications Board of the Appraisal Foundation; and (3) not be listed on the General Services Administration’s Suspension and Debarment List, HUD’s Limited Denial of Participation List, or HUD’s Credit Alert Interactive Voice Response System. In May 2003, HUD published a final rule making several additional changes to the licensing and certification requirements for the roster. Specifically: An appraiser who was included on the roster in June 2003, but did not meet the minimum Appraiser Qualifications Board licensing or certification criteria had 12 months to comply with these criteria and submit evidence of compliance to HUD. Failure to comply constituted cause for removal from the roster. An appraiser whose licensing or certification in a state has been revoked, suspended, or surrendered as a result of a state disciplinary action is automatically removed from the roster. 
An appraiser whose licensing or certification in a state has expired may not conduct HUD appraisals in that state. HUD does not formally recertify appraisers whose licenses or certifications have expired or have been revoked. Instead, these functions are performed at the state level, and HUD is notified electronically: an interface with state appraiser regulatory systems confirms that appraisers have passed the appropriate exams and that licenses have been renewed, and daily e-mails from the Appraisal Subcommittee alert HUD when licenses have been revoked. HUD is seeking to establish an electronic connection to the Appraisal Subcommittee, which would enable automatic notification when appraisers are sanctioned by states and when appraisers’ licenses or certifications need to be renewed. While HUD strengthened the requirements for approving appraisers for placement on the appraiser roster, quality control over approval procedures is limited. According to HUD’s guidance on placing new appraisers on the roster, HUD valuation staff are supposed to verify eligibility by checking (1) the Appraisal Subcommittee’s National Registry to ensure that the applicant is listed, (2) the General Services Administration’s Excluded Parties List System and HUD’s Limited Denial of Participation List to ensure that the applicant is not listed, and (3) the Credit Alert Interactive Voice Response System to ensure that the applicant’s social security number is not associated with any defaults or delinquencies in other federal loan programs. We found that HUD’s quality control for approving appraisers for placement on the roster is limited. According to HUD officials, the employees responsible for appraiser approval check to ensure the applications include all relevant information, verify that applicants are eligible to participate in HUD programs, and enter applicants’ names into the Computerized Homes Underwriting Management System. They perform the eligibility verifications manually, checking the aforementioned registries and lists to ensure that the applicant is appropriately listed. HUD officials explained that they are developing a contract to establish a system that will track these verifications. In addition, while HUD officials indicated that they do perform quality control on the roster placement procedures, this quality control is limited. A HUD official conducts quality control reviews of a random sample of the approving employees’ work, but not on a routine basis. HUD does not document these quality control reviews and could not provide evidence that they were performed. HUD officials indicated that they are planning to develop and implement a quality control plan for the appraiser approval process. HUD uses a risk-based targeting approach to identify appraisers for review. HUD has shifted its focus from targeting appraisals to targeting appraisers, modifying its approach in an attempt to more effectively identify and monitor appraisers most associated with known risks to FHA’s mortgage insurance fund. The HOCs have generally reviewed the appraisers targeted during fiscal year 2003 and the first half of fiscal year 2004, but the reviews have not consistently met HUD’s criteria for completeness. In addition, HUD has not performed adequate oversight of contractors who conduct field reviews of appraisals. HUD’s process for monitoring appraisers is risk based. In 1999, we reported that HUD’s guidance called for random reviews of 10 percent of all appraisals.
HUD modified this approach and now targets for review appraisers who are associated with known risks to FHA’s mortgage insurance fund, including those associated with a large number of defaulted loans, those who perform a large volume of appraisals, and those who appraise properties for loans with characteristics that are associated with high default and claim rates, including loans made under the 203(k) rehabilitation program, loans with nonprofit mortgagors, and loans for properties with multiple (three to four) units. HUD’s new approach is intended to identify poorly performing appraisers rather than poorly prepared appraisals. The goal of this approach is to remove from the appraiser roster appraisers who have not complied with HUD requirements and therefore pose a risk to FHA’s mortgage insurance fund, disqualifying them from doing business with HUD. The “early default rate” is the primary factor that HUD uses to identify poorly performing appraisers. On a quarterly basis, each HOC first identifies appraisers with the highest percentage of early defaults over the last 12-month period. Early defaults are defined as those that occur within 12 months of loan origination and represent a delinquency—which occurs when the borrower is unable to honor the mortgage obligation—of 90 days or greater. Each HOC next identifies, from the pool of appraisers associated with high early default rates, those appraisers who performed 10 or more appraisals and who performed appraisals for five or more defaulted mortgages. To do this, HUD uses its Neighborhood Watch Early Warning System—a Web-based software application that displays loan performance data for lenders and appraisers by loan types and geographic areas using FHA-insured single-family loan information—which was enhanced to include summary and loan level appraiser data to enable the targeting of appraisers for review. The system not only helps HUD target appraisers associated with a high rate of early defaults but also provides HUD the ability to identify and analyze patterns—by appraiser, geographic area, or originating lender—in loans that go into early default. According to HUD’s guidance, each HOC must develop a target list of appraisers to be reviewed based on the targeting criteria. They also must review at least 30 appraisers each quarter, but these do not necessarily need to be pulled from the target lists. In addition to selecting appraisers using the targeting criteria, the HOCs also may review appraisers for other reasons. For example, HOC officials informed us that they also include on the targeting lists appraisers who have recently been sanctioned and have completed their sanction period. The officials indicated that this helps them to ensure that recently sanctioned appraisers have corrected their relevant deficiencies and do not repeat past performance problems. However, because this targeting criterion is not required, there is no assurance that it will be used consistently. The HOCs may also review appraisers based on complaints from homebuyers and referrals from other HUD offices. To help identify appraisers to be placed on the target lists, HUD has recently implemented a statistical risk-based appraiser-sampling algorithm. 
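Before turning to that newer sampling algorithm, the quarterly targeting steps described above, ranking appraisers by early default rate and then keeping those with 10 or more appraisals and 5 or more defaulted loans, amount to a simple filter. The sketch below is illustrative only: the appraiser records are hypothetical, and the 10 percent cutoff stands in for HUD's ranking of the highest early default rates, which this report does not quantify.

```python
MIN_APPRAISALS = 10   # appraisers who performed 10 or more appraisals
MIN_DEFAULTS   = 5    # and who appraised properties for 5 or more defaulted mortgages

def quarterly_target_list(appraisers, high_default_cutoff=0.10):
    """Apply the targeting criteria described above to hypothetical appraiser records.

    Each record carries the appraiser's name, the number of appraisals performed
    over the last 12 months, and the number of those loans that went into early
    default (90 days or more delinquent within 12 months of origination)."""
    targeted = []
    for a in appraisers:
        early_default_rate = a["early_defaults"] / a["appraisals"]
        if (early_default_rate >= high_default_cutoff
                and a["appraisals"] >= MIN_APPRAISALS
                and a["early_defaults"] >= MIN_DEFAULTS):
            targeted.append((a["name"], round(early_default_rate, 3)))
    # highest early default rates first
    return sorted(targeted, key=lambda pair: pair[1], reverse=True)

sample = [
    {"name": "Appraiser A", "appraisals": 60, "early_defaults": 9},
    {"name": "Appraiser B", "appraisals": 8,  "early_defaults": 5},   # too few appraisals
    {"name": "Appraiser C", "appraisals": 40, "early_defaults": 3},   # too few defaults
]
print(quarterly_target_list(sample))  # [('Appraiser A', 0.15)]
```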
The risk-based sampling algorithm helps to identify appraisers for desk and field reviews, focusing on those who are more likely to be associated with adverse outcomes, including (1) early default of an FHA-insured loan, (2) large dollar amount of claims on the FHA mortgage insurance fund, or (3) severity of the net dollar loss on the FHA mortgage insurance fund. The algorithm also incorporates risk factors statistically related to these adverse outcomes, including appraiser workload, performance in high-risk programs, and geographical area. According to HUD, this enhanced and automated targeting helps to ensure the efficient use of resources for field reviews. Because the HOCs do not maintain a permanent record of the data used to identify appraisers for review each quarter, we could not verify that the appraisers they placed on their target lists were actually those that met HUD’s criteria. The HOCs maintain general information about the reasons why appraisers are targeted for review, specifically labeling the reasons appraisers are targeted as “high default rate,” “high volume,” or high-risk loans or properties. However, they do not maintain specific early default rate information for the appraisers targeted, even though early default rate is the primary factor behind HUD’s targeting approach. The Neighborhood Watch Early Warning System allows HUD officials to maintain this information. For example, HUD uses this system to target lenders participating in its Single-Family Mortgage Insurance programs for review and maintains targeted lenders’ early default information. However, according to HOC officials, once the appraiser target lists are created, the HOCs do not maintain the targeted appraisers’ early default information. Without the specific default rate information, we were unable to determine whether the HOCs reviewed those targeted appraisers who posed the greatest risk based on high default rate. More importantly, in the absence of this information, HUD is unable to monitor the HOCs to ensure that the appropriate appraisers were targeted and reviewed based on its criteria and may be unable to determine the effectiveness of its targeting criteria in reducing risk to the mortgage insurance fund. Overall, the HOCs reviewed 730 (almost 78 percent) of the 936 appraisers who were placed on the target lists during fiscal year 2003 and the first half of fiscal year 2004. However, as shown in figure 3, the percentage varied among the HOCs. Each HOC exceeded the goal of reviewing 30 appraisers per quarter. Specifically, they reviewed a total of 2,055 appraisers over this period, or an average of more than 85 appraisers per HOC per quarter. (In addition to the 730 appraisers who were reviewed because they were on the target lists, the HOCs reviewed 1,325 appraisers who were not on the target lists but were reviewed for other reasons, including complaints from homebuyers and referrals from other HUD offices, for a total of 2,055 appraisers reviewed.) HOC officials explained that they are not always able to conduct reviews of the appraisers within the quarter targeted because of resource constraints, but indicated that they eventually perform reviews of all targeted appraisers. HUD’s guidance calls for the HOCs to conduct desk reviews of 10 appraisals prepared by each appraiser identified for review through the targeting methodologies. The HOCs are to use a standard set of desk review criteria, the focus of which is to identify deficiencies in the content and format of the reported data.
The appraisal report is to be analyzed for reasonable and logical conclusions of value to determine if the appraisal data are consistent with FHA requirements. However, the HOCs did not review every appraiser to the extent called for in the guidance. The HOCs performed, on average, about 5.6 desk reviews for each appraiser reviewed during fiscal year 2003 and the first half of 2004. As shown in figure 4, the Philadelphia HOC was the only HOC that conducted almost 10 desk reviews for each appraiser reviewed during this period. Officials from the other HOCs explained that an appraiser might have conducted fewer than 10 appraisals, and so the HOCs would be unable to perform the required number of desk reviews. However, as noted earlier, HUD’s targeting criteria provide that from the pool of appraisers associated with high early default rates, those appraisers performing 10 or more appraisals and with five or more defaulted cases should be targeted. HOC officials also told us that they attempt to perform desk reviews of appraisals that were conducted no more than one year prior to the time of the review and that, if possible, they try to perform these reviews on the appraisers’ 10 most recent appraisals. While this approach is not required, HOC officials explained that it helps them to ensure that an appraiser’s most recent work is being reviewed and that appraisals are not outdated at the time of review. According to HUD guidance, if a desk review concludes that an appraisal is inconsistent or unacceptable, then a field review is warranted on up to five appraisals prepared by that appraiser. HUD uses contractors and HUD employees who are qualified as appraisers to conduct field reviews. The review consists of a comprehensive inspection of the subject property’s interior and exterior, with the reviewer reporting any readily observable defective conditions (whereby the property does not meet minimum property standards as laid out in HUD’s guidance). The reviewer also must perform an exterior inspection of the comparable properties—other recently sold properties with similar features used to help the appraiser estimate the value of the subject property—submitted in the original appraisal and must verify all data reported by the original appraiser for the subject property and comparables. We found that HUD staff do not routinely visit appraised properties to verify the work of the field review contractors. According to HUD guidance, on-site monitoring reviews by HUD staff are essential for high-risk program participants to the extent practicable. HUD officials explained that they are constrained by limited travel resources and so are not able to make on-site visits to properties. HUD officials agreed contract oversight is important but indicated that it is often not cost efficient to send employees on site to review contractors’ work because many of the department’s contractors are responsible for reviewing only a few properties. However, HUD officials indicated that they are planning to develop a cost-efficient oversight mechanism. Expanded authority giving HOCs the ability to sanction appraisers has provided the HOCs with additional enforcement options. According to HUD officials, by expanding their ability to sanction appraisers and by focusing oversight on appraisers instead of appraisals, they are able to effectively and efficiently impose sanctions on appraisers. 
HUD reviews and quantifies appraisers’ work by using a Web-based tool, the Appraisal Review Process, a system that scores each appraiser on several appraisals, weighting the scores to capture violations that pose the greatest risk to FHA’s mortgage insurance fund. According to HUD, the system helps to make the process of sanctioning appraisers more consistent. In addition, HUD has issued a final rule to hold lenders accountable for poor appraisals. Lenders who submit appraisals that do not meet HUD requirements are now subject to the imposition of sanctions by the department. In 2000, a HUD regulation expanded the department’s ability to sanction appraisers at the national level by giving the HOCs the authority to remove appraisers from the roster. As figure 5 illustrates, for the 1,004 appraisers field reviewed by HUD in fiscal year 2003 and the first half of fiscal year 2004, 620 sanctions were imposed, with 180 appraisers having been removed from the appraiser roster. Prior to receiving this expanded authority, the only enforcement tool at the HOCs’ disposal was issuance of limited denials of participation. However, the HOCs needed to refer limited denials of participation to headquarters, and the sanctions were only effective in the particular HOC’s jurisdiction for a year. Currently, the sanctions available to the HOCs include removal from the roster for 6 to 12 months, removal from the roster in conjunction with education for 6 to 12 months, education for up to 90 days, notices of deficiency, and limited denials of participation. Other sanctions available to HUD through headquarters include suspension—often used as a temporary measure to stop an appraiser from doing business with HUD until a more serious action can be taken—for up to 12 months or until the conclusion of legal or debarment proceedings; debarment, which removes an appraiser from the FHA roster, generally for up to 3 years; and civil and criminal penalties. In fiscal year 2003 and the first half of fiscal year 2004, HUD reports that it suspended 14 appraisers from the roster and did not debar or impose civil or criminal penalties on any appraisers. HUD officials stated that these sanctions are harder to use and less timely, so they focus their efforts on those sanctions they can use at the HOC level. Figure 5 illustrates the extent to which the department has made use of each type of enforcement action available, as reported by HUD. In addition, HUD officials explained that by changing the oversight approach to focus targeting efforts on appraisers instead of appraisals, they can now better focus oversight efforts on appraisers with known risks. Specifically, they reported that they can now review a smaller number of appraisers and use sanctions more effectively and efficiently. For example, HUD reports that since 1998, the number of removal actions taken by the department has increased, while the number of field reviews and the cost to the agency have decreased, as shown in table 1. In 2002, HUD developed a monitoring and enforcement tool called the Appraisal Review Process, a risk-based appraisal scoring system that scores appraisers who are field reviewed. (Field-reviewed appraisers include those targeted and field reviewed by HUD on a quarterly basis as well as those who were not necessarily targeted but may have been field reviewed for a variety of reasons, including complaints from homebuyers and referrals from other HUD offices.)
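The weighted scoring that the Appraisal Review Process applies, described above and detailed in the next paragraph, can be illustrated with a small sketch. The question labels, weights, and point thresholds below are hypothetical; the report does not disclose HUD's actual scoring values, and the sketch only approximates how the system maps weighted results to its removal, education, and notice-of-deficiency outcomes.

```python
# Hypothetical weights: questions tied to factors that pose greater risk to the
# insurance fund (for example, accuracy of market value, characterization of
# repair conditions) carry more points than documentation-style questions.
QUESTION_WEIGHTS = {
    "market_value_accuracy":   10,
    "repair_conditions":        8,
    "comparable_selection":     5,
    "reporting_documentation":  2,
}
REMOVAL_THRESHOLD   = 15   # hypothetical stand-in for the "maximum allowable points"
EDUCATION_THRESHOLD =  8   # hypothetical

def score_field_review(violations):
    """Sum the weights of the field review questions the appraiser failed."""
    return sum(QUESTION_WEIGHTS[v] for v in violations)

def recommended_action(violations):
    """Map a weighted score to the three outcomes the system can recommend."""
    points = score_field_review(violations)
    if points > REMOVAL_THRESHOLD:
        return "removal"
    if points > EDUCATION_THRESHOLD:
        return "education"
    return "notice of deficiency"

print(recommended_action(["market_value_accuracy", "repair_conditions"]))        # removal (18 points)
print(recommended_action(["comparable_selection", "reporting_documentation"]))   # notice of deficiency (7 points)
```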
In an attempt to ensure consistency within and across HOCs, HUD designed this tool to (1) weigh each field review question used to assess appraiser performance and (2) recommend actions to be taken against appraisers. The tool provides the rater with a systematic way of thoroughly examining the written appraisal and carrying out the corresponding field review. Based on the desk and field review data, the system yields a recommendation of removal, education, or notice of deficiency. For example, questions associated with appraisal factors that are considered to be of greater risk to the fund—such as the accuracy of market value of the property and characterization of repair conditions—receive a higher weight and automatically result in a removal recommendation if the appraisal exceeds the maximum allowable points. The Appraisal Review Process tool allows HUD to review appraisers and impose sanctions on them in a systematic way. In 1999, we reported that HUD was not holding appraisers accountable for the quality of their appraisals and that the primary reason for HUD’s inability to pursue enforcement actions against poorly performing appraisers was poor record keeping. According to HUD officials, the Appraisal Review Process’s systematic approach to reviewing appraisers’ work and maintaining electronic records of appraisers’ performance has helped the HOCs maintain better documentation. The data manager at each HOC orders cases for each of the targeted appraisers and assigns them to a desk reviewer. Based on the results of the desk review, the desk reviewer can decide that a field review for a particular appraisal is warranted or that no further action is necessary. If a field review is warranted, the appraisal is assigned to a contractor or HUD employee, who measures the quality and accuracy of appraisers’ performance in the completion of the desk-reviewed appraisal and up to four other appraisals prepared by the targeted appraiser and inputs the results electronically into the Appraisal Review Process. Based on the desk and field review data, the system yields a recommendation of removal, education, or notice of deficiency. A HUD rater then looks at the field review score generated by the system, factors in past performance and results from other appraisals, and recommends a proposed action. The branch chief must concur before the appraiser is notified of the action, at which point the appraiser has 20 days to appeal. Figure 6 portrays the major steps of the Appraisal Review Process. According to HUD’s guidance, once a recommended action is affirmed, the appraiser roster is updated to reflect the change. If the appraiser is removed from the roster, lenders cannot assign cases to the appraiser until the appraiser is reinstated. Further, if the appraiser violated any of the laws in the state in which the appraiser is licensed, then the appropriate state regulatory agency is notified. In 1999, we recommended that HUD determine its authority to hold lenders accountable for poor-quality HUD appraisals performed by the appraisers they select from the roster and issue policy guidance that sets forth the specific circumstances under which and actions by which HUD may exercise this authority. In July 2004, HUD issued a final rule clarifying lenders’ accountability for the quality of appraisals on properties securing FHA-insured mortgages. Specifically, the rule provides that lenders who submit appraisals that do not meet HUD requirements are subject to the imposition of sanctions by the department. 
The rule applies to both sponsor lenders, who underwrite loans, and loan correspondents, who originate loans on behalf of sponsor lenders. HUD believes these changes will help ensure better compliance with appraisal standards and ensure that homebuyers receive an accurate statement of appraised value. The importance of accurate appraisals to HUD’s Single-Family Mortgage Insurance programs underscores the need for effective appraiser oversight. HUD relies on appraisals to ensure that the billions of dollars in mortgage loans it insures annually accurately reflect the value of the homes being mortgaged. Since our April 1999 report, HUD has taken a number of steps designed to ensure the qualifications of appraisers on its roster; improve the efficiency and effectiveness of its oversight, specifically by revising its guidance to focus on appraisers (rather than appraisals) and incorporating a risk-based monitoring approach; and facilitate enforcement actions by empowering HOCs and developing a scoring system to promote consistency. However, certain weaknesses in implementing these initiatives limit their ability to (1) lower HUD’s risk of insuring properties that are overvalued and (2) minimize potential losses to FHA’s mortgage insurance fund. Thus, opportunities exist to enhance HUD’s appraiser approval and monitoring efforts. HUD’s process for verifying that appraisers meet all relevant criteria when applying for placement on its roster lacks effective quality control. An effective control process is essential for HUD to systematically assure and demonstrate that all eligibility criteria are verified with respect to appraisers applying for placement on the roster so they can perform appraisals in connection with HUD’s Single-Family Mortgage Insurance programs. Further, while HUD’s guidance specifies criteria for targeting appraisers based on a set of known risk factors, it does not require the HOCs to target for review appraisers who have been recently sanctioned, even though the HOCs sometimes do so in order to ensure that the problem for which the appraiser was sanctioned has been resolved. Requiring this criterion for targeting appraisers for review could help assure that sanctioned appraisers will not repeat past performance problems. Similarly, HUD does not require that the HOCs maintain historical information, particularly data on the associated default rates of loans, used to target and select appraisers for review. Without this information, HUD cannot demonstrate that the appropriate appraisers are being systematically targeted and reviewed based on its criteria and may be unable to determine the effectiveness of its targeting criteria in reducing risk to the mortgage insurance fund. HUD rarely verifies the work of its field review contractors through on-site evaluations, weakening the department’s ability to ensure that contracted work is actually performed and to accurately assess the quality of the appraisals used to support the loans the department insures. While it entails costs, on-site monitoring is an essential part of any monitoring process and is an important way to verify that work is actually being conducted and to accurately assess the quality of appraisals. 
To reduce the financial risks assumed by HUD and to further enhance its oversight of appraisers participating in HUD’s Single-Family Mortgage Insurance programs, we recommend that the Secretary of HUD direct the Assistant Secretary for Housing-Federal Housing Commissioner to institute reasonable controls on the process of placing appraisers on the appraiser roster to ensure that applicants’ conformance to eligibility criteria is verified; consider a requirement to include, when targeting appraisers for review, those appraisers who have recently completed a sanction period in order to ensure that these appraisers have corrected their relevant deficiencies; maintain the historical information, particularly early loan default information, used to target appraisers for review in order to ensure that the HOCs target and review appraisers based on the criteria in HUD guidance; and implement a cost-effective field review contractor oversight process that includes on-site monitoring. We provided a draft of this report to HUD for its review and comment. In written comments from HUD’s Assistant Secretary for Housing–Federal Housing Commissioner, HUD agreed with three of our four recommendations, but disagreed with our presentation of its accomplishments as well as some of our findings. The full text of HUD’s comments appears in appendix II. HUD agreed with three of our recommendations. Specifically, it agreed to consider a requirement to include, when targeting appraisers for review, those appraisers who have recently completed a sanction period. Also, the department agreed to modify its system to archive quarterly reports in response to our recommendation that it maintain the historical information used to target appraisers for review. Further, it agreed to consider implementing a cost-effective field review contractor oversight process that includes on-site monitoring. However, HUD disagreed with our recommendation that it institute reasonable controls on the process of placing appraisers on the appraiser roster. HUD commented that our report inaccurately stated that HUD does not document all verifications of appraisers’ eligibility for the roster and has limited quality control over the approval process, noting that the department has a paper record of the application review process whereby each applicant’s eligibility is verified and documented. HUD also noted that we did not review its paper records of the application review process. We modified the report to clarify that our primary concern was quality control, in general, and not solely documentation. We did not review the paper files because it was not our objective to test whether or not specific verifications had been performed, but rather to examine the overall verification and documentation procedures HUD relies on to ensure that appraisers meet its criteria. In doing so we observed a control weakness and we modified the report to clarify this weakness. Specifically, a HUD official conducts quality control reviews over a random sample of the approving employees’ work, but not on a routine basis. Also, HUD does not document these quality control reviews and could not provide evidence that they were performed. 
At a meeting to discuss the results of our review, HUD’s acting Deputy Assistant Secretary for Single-Family Housing and the Director of the Office of Single-Family Program Development agreed that they do not systematically document that all of the verifications have been conducted and explained that they are developing a contract to establish a system that will track these verifications. They also indicated that they are planning to develop and implement a quality control plan for the appraiser approval process. We modified the recommendation to emphasize that HUD should institute reasonable quality controls on the process of placing appraisers on the appraiser roster. HUD commented that while we acknowledged that it implemented policy and procedural changes, we did not recognize the significance of these changes, and that it is appropriate and necessary for our report to clearly present and highlight these significant achievements. While we agree that HUD has made significant improvements, our objectives concern HUD’s appraiser oversight as it currently exists, regardless of past weaknesses. Nevertheless, we noted a number of specific improvements in the draft report. Specifically, we noted that the number of removal actions taken by the department has increased, while the number of field reviews and the cost to the agency have decreased, as represented in table 1. With respect to HUD’s adoption of a new risk-based approach for monitoring appraisers, we reported that HUD’s process for monitoring appraisers is risk based and that HUD modified its approach to target for review appraisers who are associated with known risks to FHA’s mortgage insurance fund. In addition, we reported that the department issued guidance that required appraisers to, among other things, pass an examination on HUD appraisal methods and reporting. We also noted that HUD issued several rules and mortgagee letters to strengthen its oversight and control of appraisers and improve appraisal quality. HUD also disagreed with the accuracy of some of our findings. Specifically, HUD characterized as inaccurate our statement that in the absence of historical early default rate information, the department may be unable to determine the effectiveness of its appraiser targeting criteria in reducing risk to the mortgage insurance fund. HUD explained that its system directly targets the appraisers that pose the greatest risk to the fund. We concur that HUD’s process is designed to so target, and our draft characterized the approach as risk based and described the specific criteria HUD’s process calls for to target appraisers for review. However, because the HOCs do not maintain a permanent record of the data showing which appraisers met the criteria in each quarter, we could not verify that the appraisers the HOCs placed on their target lists were actually those that met HUD’s criteria. Similarly, without these records, HUD is unable to determine whether the HOCs reviewed those appraisers who met the criteria. In turn, this limits HUD’s ability to determine, over time, the effectiveness of its targeting criteria in reducing risk to the mortgage insurance fund. HUD went on to say that FHA would modify its system to archive quarterly reports in order to maintain the historical targeting records. In addition, HUD disagreed that it conducted limited oversight of field review contractors and that such efforts are affected by limited travel resources. 
HUD explained that it conducts a 100 percent review of contractors’ work and that the HOCs do not conduct on-site reviews (which may require travel resources) because the 100 percent review method serves as an appropriate and effective risk-control measure. As we noted in our report, HUD’s guidance states that on-site monitoring reviews by HUD staff are essential for high-risk program participants to the extent practicable. Further, HOC officials told us that they are constrained by limited travel resources and so are not able to make on-site visits to properties. At a meeting to discuss the results of our review, HUD’s acting Deputy Assistant Secretary for Single-Family Housing and the Director of the Office of Single-Family Program Development agreed that contract oversight is important but indicated that it is often not cost efficient to send employees on site to review contractors’ work because many of the department’s contractors are responsible for reviewing only a few properties. However, as we reported in our draft, these officials indicated that they are planning to develop a cost-efficient oversight mechanism. Finally, HUD disagreed with our conclusion that weaknesses in implementing its appraiser oversight initiatives limit the department’s ability to (1) lower its risk of insuring properties that are overvalued and (2) minimize potential losses to FHA’s mortgage insurance fund. HUD also stated that our recommendations would not affect FHA’s risk. As we stated in the draft report, we did not attempt to estimate the impact that HUD’s appraiser oversight has on the financial health of FHA’s mortgage insurance fund. While we agree that HUD’s new targeting methodology is intended to reduce risks to FHA, our concern is whether the methodology is operating as intended. Our recommendations relate to the implementation of processes that are directed at controlling and minimizing risk and we continue to believe that opportunities exist to enhance HUD’s appraiser approval and monitoring. We are sending copies of this report to the Secretary of HUD. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have questions or comments on matters discussed in this report, please contact me at (202) 512-6878 or [email protected], or Paul Schmidt, Assistant Director, at (312) 220-7681 or [email protected]. Major contributors to this report are listed in appendix III. To examine how HUD ensures that appraisers it approves to perform appraisals under its Single-Family Mortgage Insurance programs are qualified to be placed on the appraiser roster, we reviewed pertinent HUD regulations and policy guidance and the minimum licensing criteria established by the Appraiser Qualifications Board of the Appraisal Foundation. In addition, we discussed this information with officials from HUD’s Single-Family Housing Office of Program Development. Further, we met with the staff member responsible for maintaining the FHA appraiser roster and observed the process for adding approved appraisers to the roster. To assess the extent to which HUD uses a risk-based approach when monitoring appraisers, we interviewed officials at the four HOCs and observed a demonstration of their quarterly targeting procedures. We reviewed HUD’s risk-based targeting guidance and obtained data for fiscal year 2003 through the first half of fiscal year 2004 from each of the HOCs. 
We then compared each of the HOCs’ appraiser target lists to their desk and field review lists to determine the number of targeted appraisers that were actually reviewed. Further, from each of the HOCs’ desk review lists, we calculated the numbers of desk reviews performed by the HOCs on each appraiser reviewed in order to assess whether the HOCs have been following HUD’s guidance. To examine HUD’s efforts to take enforcement actions against appraisers it identifies as not complying with its requirements, we reviewed HUD’s guidance regarding enforcement actions taken against poorly performing appraisers. We also discussed enforcement issues with officials from HUD’s Office of Single-Family Housing and the Departmental Enforcement Center. At the HOCs, we discussed the Appraisal Review Process and the HOCs’ ability to sanction appraisers. We obtained data generated from the Appraisal Review Report on HUD’s sanctions imposed between fiscal year 2003 and the first half of fiscal year 2004 and compared this data to the number of field reviews conducted during the same time period. We focused this analysis on removals because removals are the strongest type of action that can be taken at the HOC level. We assessed the reliability of the HUD data we used by reviewing information about how the data were collected, and we interviewed HUD officials to determine the completeness and accuracy of the data provided. We performed electronic testing on the data elements used for our analysis to detect obvious errors in completeness and reasonableness. We determined that these data were sufficiently reliable for the purposes of this report. Finally, we discussed appraiser oversight issues with officials from the Appraisal Subcommittee, the Appraisal Foundation, the Appraisal Institute, the Federal Home Loan Mortgage Corporation, and the Federal National Mortgage Association. We performed our work from December 2003 through August 2004 in accordance with generally accepted government auditing standards. Staff members who made key contributions to this report include Eric Diamant, Mark Egger, Harold Fulk, Nadine Garrick, Curtis Groves, John McGrail, Mark Molino, Josephine Perez, David Pittman, Terry Richardson, and Paige Smith.
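The electronic testing of data elements mentioned above, which screens for obvious errors in completeness and reasonableness, can be illustrated with a minimal sketch. The field names, date window, and sample records below are hypothetical illustrations of the kind of checks such testing involves, not HUD's actual data layout.

```python
# A minimal sketch of electronic testing for completeness (no missing key
# fields) and reasonableness (dates fall within the review period).
# Field names, the date window, and the sample records are hypothetical.

from datetime import date

REVIEW_START = date(2002, 10, 1)   # fiscal year 2003 begins
REVIEW_END = date(2004, 3, 31)     # end of the first half of fiscal year 2004

REQUIRED_FIELDS = ("appraiser_id", "hoc", "review_date", "action")

def check_record(record):
    """Return a list of problems found in one review/sanction record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing {field}")
    review_date = record.get("review_date")
    if review_date and not (REVIEW_START <= review_date <= REVIEW_END):
        problems.append("review_date outside review period")
    return problems

records = [
    {"appraiser_id": "A123", "hoc": "Philadelphia", "review_date": date(2003, 6, 2), "action": "removal"},
    {"appraiser_id": "", "hoc": "Denver", "review_date": date(2005, 1, 15), "action": "education"},
]

for rec in records:
    issues = check_record(rec)
    print(rec.get("appraiser_id") or "<blank>", "->", issues or "ok")
```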
Incomplete or inaccurate appraisals resulting in property overvaluations may expose the Department of Housing and Urban Development's (HUD) Single-Family Mortgage Insurance programs--which insured about 3.7 million single-family mortgage loans with a total value of about $425 billion in fiscal years 2001 through 2003--to greater financial risks. In 1999, GAO reported on the need for improvements in HUD's oversight of appraisers, which has historically been a challenge for the department. Also, in the past, GAO reported that, due in part to poor oversight of appraisers, HUD's Single-Family Mortgage Insurance programs remained a high-risk area. GAO conducted this review as a follow-up to the 1999 report. This report examines (1) how HUD ensures that appraisers it approves are qualified to perform FHA appraisals, (2) the extent to which HUD employs a risk-based monitoring approach, and (3) HUD's efforts to take enforcement action against noncompliant appraisers. Through new guidance and regulation, HUD has strengthened its criteria for placing appraisers on its appraiser roster--which establishes their eligibility to participate in HUD programs. Before 1999, HUD relied largely on the states' licensing processes to ensure that appraisers were qualified, but the states' minimum licensing standards did not specifically include proficiency in HUD's appraisal requirements. HUD's 1999 guidance requires appraisers to, among other things, pass an examination on HUD appraisal methods and reporting. Further, a 2003 regulation provides for, among other things, removing from the roster appraisers whose licenses have been suspended or revoked. However, HUD has limited quality control over the approval process, which reduces the department's assurance that its criteria are being effectively implemented. HUD has adopted an oversight approach that focuses on appraisers it believes pose risks to FHA's mortgage insurance fund, but certain weaknesses exist in its implementation. HUD's guidance calls for its homeownership centers (HOCs)--which are largely responsible for appraiser oversight--to develop quarterly targeting lists of appraisers for review based on certain criteria, or risk factors. The primary factor is the rate of defaults in certain loans associated with the appraiser; others include large numbers of appraisals as well as appraisals for loans made under one of HUD's programs known to be at higher risk of fraud and abuse. However, the HOCs do not maintain a permanent record of the data used to identify the targeted appraisers--even though HUD's automated system would enable them to--which limits HUD's ability to verify that those targeted were those that met the criteria and to determine the effectiveness of its targeting criteria in reducing risk to the mortgage insurance fund. GAO found that during fiscal year 2003 and the first half of fiscal year 2004 the HOCs generally reviewed the appraisers they identified as high risk and targeted for review. However, they reviewed fewer appraisals for each targeted appraiser than HUD's guidance prescribes: on average, about 5.6 appraisals instead of the 10 called for. GAO also found that HOC staff did not routinely visit appraised properties to verify the work of contractors who conduct field reviews of selected appraisers. To facilitate enforcement actions against appraisers, HUD expanded the HOCs' authority to sanction appraisers and developed a new appraisal scoring system.
According to HUD, the number of actions taken to remove appraisers from its roster has increased from 25 at a cost of over $10 million in 1998 to 132 at a cost of under $300,000 in 2003. HUD also developed a tool that scores each appraiser on several appraisals, weighting the scores to capture violations that pose the greatest risk to FHA's mortgage insurance fund. According to HUD, this tool allows the department to sanction appraisers more consistently.
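To illustrate the kind of weighted scoring and threshold logic the Appraisal Review Process is described as using, the sketch below assigns point weights to field review questions and maps the total to a recommendation of removal, education, or notice of deficiency. The question names, weights, and cutoffs are assumptions for illustration only; HUD's actual values are not given in this report.

```python
# Minimal sketch of a weighted field-review scoring scheme of the kind
# described above.  Question names, weights, thresholds, and the exact
# decision rule are hypothetical, not HUD's actual Appraisal Review Process.

HIGH_RISK_QUESTIONS = {"market_value_accuracy", "repair_conditions"}

# Hypothetical weights: higher-risk questions carry more points per violation.
QUESTION_WEIGHTS = {
    "market_value_accuracy": 10,
    "repair_conditions": 8,
    "comparable_selection": 4,
    "report_completeness": 2,
}

EDUCATION_THRESHOLD = 5    # hypothetical cutoffs
REMOVAL_THRESHOLD = 15

def score_field_review(violations):
    """Sum weighted points for the questions marked as violations."""
    return sum(QUESTION_WEIGHTS.get(q, 0) for q in violations)

def recommend_action(violations):
    """Map a weighted score to removal, education, or notice of deficiency."""
    points = score_field_review(violations)
    # A violation on a high-weight question that pushes the score past the
    # maximum allowable points triggers a removal recommendation outright.
    if points > REMOVAL_THRESHOLD and HIGH_RISK_QUESTIONS & set(violations):
        return "removal"
    if points > EDUCATION_THRESHOLD:
        return "education"
    return "notice of deficiency"

if __name__ == "__main__":
    print(recommend_action({"market_value_accuracy", "repair_conditions"}))  # removal
    print(recommend_action({"report_completeness"}))                         # notice of deficiency
```

In practice, a rater would also factor in past performance and results from other appraisals before a branch chief concurs, as the report describes; the sketch covers only the automated scoring step.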
The Congress passed initial legislation in October 1988 to bring DOD’s base structure into line with its smaller post-Cold War force structure. Generally, the process, as modified by subsequent legislation, called for (1) establishing independent commissions to recommend installations for realignment or closure and (2) implementing the commissions’ recommendations within 6 years of the date the President sends the commissions’ recommendations to the Congress. The realignment of underutilized bases and closure of unnecessary bases were expected to result in significant savings, primarily from reduced base support costs. The February 1992 DOD Base Structure Report defined base support costs as the overhead cost of providing, operating, and maintaining the defense base structure, including real property maintenance and repair costs, base operations costs, and family housing costs. According to historical information in DOD’s Future Years Defense Program (FYDP) database, in fiscal year 1988 base support costs totaled $41 billion. During that year, most base support costs were paid from the operations and maintenance account (54 percent); the military personnel account (23 percent); and the family housing account (10 percent). The Congress recognized that an up-front investment was necessary to achieve the savings and established two accounts to fund certain implementation costs. These costs included (1) constructing new facilities at gaining bases to accommodate organizations transferred from closing bases, (2) remedying environmental problems on closing bases, and (3) moving personnel and equipment from closing to gaining bases. In addition, revenue generated when land at closing bases is sold is deposited in the BRAC accounts and used to offset one-time implementation costs. Moreover, the legislation required that DOD submit annual budgets estimating the cost and savings of each closure or realignment, as well as the period in which savings were to be achieved. According to its February 1995 budget submission, DOD estimated that, for the first three BRAC rounds, one-time implementation costs will total $16.3 billion and savings will total $16.1 billion, for a net cost of $189.6 million over the period. According to DOD, the $16.1 billion in estimated savings have been or will be reflected as reductions in DOD component appropriation accounts. Once the implementation of the three BRAC rounds is completed in fiscal year 1999, DOD estimates that annual net savings will be $4.1 billion. Our analysis of the FYDP indicates that DOD plans to substantially reduce spending for base support programs. Furthermore, our analysis of operations and maintenance costs at nine closing installations indicates that actual base support costs have been reduced at those installations and therefore savings should be substantial. However, the DOD FYDP and service accounting systems are not configured to provide information concerning actual BRAC savings, and failure to achieve them would affect the quality of base support services or DOD’s ability to fund other programs. Table 1 shows that by fiscal year 1997 DOD expects to reduce annual base support costs by about $11.5 billion from a fiscal year 1988 baseline. The cumulative reduction over the period is about $59 billion. DOD’s information system does not indicate how much of the reduction is due to BRAC versus force structure or other changes. 
In addition, an Office of the Deputy Assistant Secretary of Defense (Installations) official stated that DOD is reviewing the classification of base support programs in the FYDP, which could affect future analyses. Our analysis of the FYDP shows that, within reduced overall base support spending levels, DOD plans to increase average spending on family housing from $1,880 to $2,730 for each active duty military person between fiscal years 1988 and 1997. Average spending for the remaining base support activities is expected to remain relatively stable over the 10-year period. However, table 2 shows that, over the period, DOD’s force structure is expected to be reduced by 680,000 military personnel and average base operations and real property support costs are expected to fall slightly to about $16,600 per person. Key requirements for calculating actual BRAC savings include information on decreased support costs at closing bases and the offsetting increases at gaining bases. DOD cannot provide accurate information on actual savings because (1) information on base support costs was not retained for some closing bases and (2) the services’ accounting systems cannot isolate the effect on support costs at gaining bases. DOD officials stated that designing and implementing a system for collecting actual BRAC savings information would be difficult and extremely expensive, and they questioned the value of such a system. According to DOD officials, the accounting systems were not designed to isolate the impact of specific initiatives, such as BRAC, on base support costs. With the disestablishment of the 509th Bombardment Wing and closure of Pease Air Force Base, for example, the Wing’s FB-111 bombers were placed in storage as part of a force structure change, while its KC-135 refueling aircraft were transferred to five gaining bases along with their crews, support personnel, and equipment. The largest group of aircraft, six KC-135s, was transferred to Fairchild Air Force Base. According to Air Force officials, their systems would not allow them to determine how much of the reduction in Pease Air Force Base support costs was due to the changing strategic bomber force structure as opposed to the closure of Pease Air Force Base and how much of any increase in Fairchild Air Force Base support costs was due to the arrival of Pease aircraft. Officials stated that, since the arrival of the 6 KC-135 aircraft from Pease, Fairchild has received over 50 KC-135 tankers from other bases. The Army Audit Agency had similar difficulties in determining the actual savings from the closure of 10 Army BRAC I installations. According to the Agency’s November 1995 report, the Army’s system of management controls did not ensure that adequate documentation was retained to determine actual savings or reliable estimates of savings. The report stated, for example, that auditors were unable to locate the accounting records necessary to determine base support cost savings at one site. In addition, they could not determine incremental base support cost increases at gaining installations because the Army’s accounting system did not contain all the necessary information. We analyzed base support costs paid from the operations and maintenance account for the eight installations for which data were available. The analysis shows that the closures will have a combined net cost of $7.6 million for the implementation period, and an annual recurring savings of $212.8 million thereafter. 
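The relationship among one-time implementation costs, recurring savings, and payback periods that underlies this analysis (and table 3, discussed next) reduces to simple arithmetic: the payback period is roughly the net implementation cost divided by the annual recurring savings. A minimal sketch follows, using hypothetical per-base figures rather than the actual table 3 values.

```python
# Minimal sketch of the payback arithmetic underlying the closure analysis:
# payback period ~= net one-time implementation cost / annual recurring savings.
# The per-base figures below are hypothetical illustrations, not table 3 values.

def payback_years(one_time_cost_millions, annual_savings_millions):
    """Approximate years needed for recurring savings to offset one-time costs."""
    return one_time_cost_millions / annual_savings_millions

bases = {
    # base: (one-time implementation cost, annual recurring savings), $ millions
    "Base A": (120.0, 45.0),   # pays back in well under 6 years
    "Base B": (300.0, 28.0),   # pays back in roughly 11 years
}

for name, (cost, savings) in bases.items():
    print(f"{name}: about {payback_years(cost, savings):.1f} years to pay back")
```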
As table 3 shows, four bases (Chase Field, the Long Beach Naval Hospital, Pease, and Williams) are expected to have a net savings at the end of the implementation period, indicating a payback period of less than 6 years. The longest payback period is Fort Devens at about 11 years. Our estimates reflect force structure savings at closing bases and do not reflect incremental base support cost increases at gaining bases unless they were readily identifiable. Additionally, estimated implementation costs do not include economic assistance costs to the area affected by the closure or other costs not reported in DOD’s budget submission. Including these factors would reduce the net savings. However, our estimates also do not reflect savings due to reduced base support costs paid from the military personnel and military construction accounts or reduced family housing costs, which would increase savings. For example, at the three Air Force bases we reviewed, the Air Force estimated military personnel savings at $669.6 million over the implementation period and $156.6 million annually thereafter. DOD expects BRAC savings to provide much of the funding necessary for quality-of-life initiatives and defense modernization efforts. In November 1994, for example, the Secretary of Defense stated that the fiscal year 1996 DOD budget will increase funding by $94 million for community and family support projects, including increasing eligibility for child care support by up to 38,000 families and strengthening programs aimed at preventing family violence. Additionally, in January 1996, the Secretary stated that DOD will need to increase modernization programs to ensure the long-term readiness of defense forces. According to the Secretary, failure to achieve savings from earlier initiatives required DOD to restructure the budget. DOD stated that estimated savings from BRAC are taken out of the services’ budgets up front. This is the same process that was followed in implementing budget reductions under the defense management review initiatives. To the extent BRAC savings are not realized at the levels that were anticipated, it could have similar effects on DOD’s FYDP. DOD’s savings estimates are inconsistent because the services used different estimating methodologies, and are unreliable because the services excluded some savings and did not update some estimates to reflect revised closure schedules. In addition, DOD’s cost estimates are incomplete because the services did not include many BRAC-related nondefense costs. The methodologies used for developing the savings estimates differed among the services. The Air Force used savings estimates that were developed through the Cost of Base Realignment Actions (COBRA) model, with adjustments for inflation and recurring cost increases at gaining bases, as the basis for its estimates. The adjustments accounted for differences in the way inflation and recurring costs were treated in the COBRA and budget estimates. According to an Air Force official, however, major commands and installations were not requested to provide budget-quality data to revise the COBRA savings estimates for their bases. The Navy, on the other hand, used its Comptroller’s analyses of expected increases and decreases in each base’s costs, but no documentation was available to show how specific estimates were calculated.
For example, the Navy’s estimate for the Long Beach Naval Hospital savings assumed, among other things, that the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) costs at gaining bases would be reduced by about $143 million over the 6-year implementation period and about $38 million for each year thereafter. The Navy Comptroller was unable to provide documentation to show how that estimate was calculated. The Army based its estimates on detailed implementation plans prepared by major commands after the BRAC Commissions announced their decisions. Unlike the Navy, however, the Army eliminated CHAMPUS savings from its estimates. Also, unlike the other services, the Army excluded savings from military personnel reductions from its BRAC II and III savings estimates. The Air Force, the Navy, and DOD agencies estimated that BRACs II and III would eliminate the need for about 28,000 military personnel and save about $3.9 billion during the 6-year implementation periods. An Army official stated that military personnel savings were excluded because the reductions had already been recognized in previous initiatives. Closure implementation plans for the three Army bases we examined stated that the installations were authorized 475 military personnel for base support functions. Further, a Navy official stated that Navy estimates were reviewed annually and revised during the budget review process. According to Army and Air Force officials, their savings estimates are not routinely updated, even though some bases close faster than initial estimates, thereby resulting in increased savings. For example, the 1995 Fort Benjamin Harrison savings estimate, which has not changed since it was initially submitted in 1992, does not reflect significant operation and maintenance savings until fiscal year 1995. Our analysis indicates that savings started in fiscal year 1992 and totaled over $92 million by fiscal year 1995. According to a DOD Comptroller official, the Office of the Secretary of Defense provided no additional guidance to the services on developing savings estimates other than the guidance on preparing COBRA estimates. He said that DOD headquarters and the services focused most of their attention on monitoring and managing BRAC costs rather than savings. In March 1996, we reported that DOD’s cost estimates for closing maintenance depots excluded some BRAC-related costs that have been or will be paid from DBOF or the operation and maintenance account. For example, the Navy estimates that, through fiscal year 1995, closing naval aviation depots and shipyards would have an accumulated operating loss of about $882 million that would be recouped from its operation and maintenance account ($695 million) or written off within DBOF ($187 million). Some of this loss was directly related to depot closures. We also reported that closing Army depots had closure-related costs and losses that were financed by DBOF. In fiscal year 1993, for example, the Sacramento Army Depot charged about $12 million in closure-related costs, including employees’ voluntary separation incentive pay, to DBOF instead of the BRAC account. The Navy and other organizations charge separation incentive pay to their BRAC account. 
In addition to depot-related closure costs, DOD estimates do not include $781 million for the following BRAC-related economic assistance costs, much of which is non-DOD:

The Economic Development Administration began providing funds for BRAC-related activities in fiscal year 1992, and has obligated about $371 million for them between fiscal years 1992 and 1995.

The Federal Aviation Administration provided about $182 million to BRAC bases through fiscal year 1995.

The Department of Labor said it could not readily tell how much it spent on BRAC-related activities between 1988 and 1990. It spent about $103 million on BRAC-related activities from fiscal years 1991 through 1995. This does not include funds distributed to states under block grants or funds spent on DOD demonstration projects, such as projects at Philadelphia and Charleston, because these funds are administered by the states.

DOD’s Office of Economic Adjustment provided $125 million to BRAC bases from fiscal years 1988 to 1995.

In addition, DOD paid about $500 million in unemployment compensation to civilian employees who lost their jobs from fiscal years 1990 through 1995. According to DOD, the BRAC process resulted in the elimination of about 31,000 civilian positions during that period, which indicates that some unemployment costs could be categorized as BRAC related. Because much of the information necessary to prepare comprehensive and reliable savings estimates for all the installations is no longer available, we are not recommending the revision of these estimates. However, should there be future BRACs, we believe the Secretary of Defense should provide and the services should implement guidance to ensure estimates are comprehensive, consistent, and well-documented. We recommend that the Secretary of Defense, at a minimum, explain the methodology used to estimate savings in future BRAC budget submissions. Also, the submissions should note that all BRAC-related costs are not included. In commenting on a draft of this report, DOD indicated that the inconsistencies in its budget savings estimates we cited were the result of an attempt to give the services reporting flexibility. DOD acknowledged that cost estimates in BRAC budget submissions do not include some costs that were paid from other DOD accounts or from non-DOD appropriations. DOD agreed that the BRAC budget submissions should include an advisory statement that economic assistance and non-DOD costs are not included. DOD also indicated that it was willing to consider including a brief statement that the BRAC budget submissions are based on the initial cost and savings estimates, which are subsequently refined through the use of site surveys. However, DOD did not believe that using different methodologies was a weakness that needed to be reported. To clarify the inconsistencies we found among the services, we have expanded the report to show the differences in (1) the extent to which COBRA estimates were updated and (2) the treatment of military personnel and CHAMPUS savings in the services’ budget estimates. We believe that eliminating the inconsistencies in the preparation of savings estimates for future BRACs would enhance the usefulness of the budget submissions. However, we deleted the term “weakness” in describing the differences in the various methodologies. With regard to non-DOD costs, the information on many excluded costs is readily available.
For example, information on $781 million in BRAC-related economic assistance costs incurred through fiscal year 1995 was readily available from various agencies. We also believe that including information from the other agencies would give the Congress a more comprehensive overview to use in evaluating the success of BRAC implementation. DOD comments are presented in their entirety in appendix I. We reviewed reports, documents, and legislation relevant to BRAC cost and savings estimates. We also interviewed BRAC and Comptroller officials from the Office of the Secretary of Defense and the military services. From officials of the Departments of Labor and Commerce and the Federal Aviation Administration, we obtained data on their BRAC-related costs. Our examination of cost and savings estimates focused on BRACs I through III because DOD had not yet developed BRAC IV estimates at the time we initiated our review. In addition, we focused our analysis of actual costs and savings on BRACs I and II because many BRAC III installations were still being closed. For our analysis of actual savings, we analyzed trend data from DOD’s historical and current FYDP databases, which were updated through June 1995. We identified base support costs by examining program element titles and discussing the costs with officials in DOD’s Office of Program Analysis and Evaluation and from the military services. We did not assess the reliability of the FYDP database. We also attempted to obtain information on actual base support costs for nine closures. We selected the closures from a listing of BRACs I and II to obtain three closing installations from each of the military services, and to ensure each closing installation was from a different major command. For one of the nine installations selected, base support cost data were not available. Where possible, we obtained actual base support cost data for the operation and maintenance account from the responsible major command. Our estimates of base support cost reductions at closing installations and incremental increases at gaining bases were based on major command estimates or our analysis of trends in the closing and gaining bases’ actual support costs. We estimated fiscal years 1995 and 1996 costs on the basis of fiscal year 1994 costs. Our analysis of the nine bases cannot be projected to all BRAC bases. While overall trends indicate substantial savings, it is possible that net savings may not be achieved at an individual location. We conducted our work from March 1995 to February 1996 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after its issue date. At that time, we will send copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. If you have any questions concerning this report, please call me on (202) 512-8412. Major contributors to this report are listed in appendix II: John Schaefer and Eddie Uyekawa.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) cost and savings estimates for past base realignment and closure (BRAC) actions, focusing on the: (1) extent to which DOD is achieving actual savings from BRAC; and (2) adequacy of the DOD process for developing the cost and savings estimates reported in its annual budget submissions. GAO found that: (1) its analysis of base support costs in the Future Years Defense Plan and at nine closing installations indicates that BRAC savings should be substantial; (2) however, DOD's systems do not provide information on actual BRAC savings; (3) therefore, the total amount of BRAC savings is uncertain; (4) if DOD does not fully achieve estimated BRAC savings, it will affect DOD's ability to fund future programs at planned levels; (5) DOD has complied with the legislative requirement for submitting annual cost and savings estimates, but there are limitations to the submissions' usefulness; (6) for example, the Air Force's savings estimates were not based on budget-quality data, and the Army's estimates excluded reduced military personnel costs that the Navy and the Air Force included in their estimates; (7) further, BRAC cost estimates excluded more than $781 million in economic assistance to local communities as well as other costs; and (8) consequently, the Congress does not have an accurate picture of the savings achieved by the BRAC process.
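As a rough consistency check, the agency-by-agency economic assistance amounts cited earlier in this report (about $371 million from the Economic Development Administration, $182 million from the Federal Aviation Administration, $103 million from the Department of Labor, and $125 million from DOD's Office of Economic Adjustment) sum to about the $781 million total; a minimal sketch:

```python
# Rough consistency check: the rounded agency amounts cited in the report
# (in millions of dollars) sum to the $781 million total.

assistance_millions = {
    "Economic Development Administration": 371,
    "Federal Aviation Administration": 182,
    "Department of Labor": 103,
    "DOD Office of Economic Adjustment": 125,
}

print(sum(assistance_millions.values()))  # 781
```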
Land use substantially affects the type and extent of nonpoint source pollution of water bodies. For example, soil erodes naturally from undisturbed land, but the amount of erosion can increase manyfold when trees are cut or the land is farmed. In addition, when land is used for housing or urban development, erosion from land clearing and excavation during construction can increase tremendously. Moreover, land use activities can also produce toxic pollution. For example, pesticide use in farming has resulted in toxic runoff, and mining has produced leaching of heavy metals and acid mine drainage. Nonpoint source pollution can have long-lasting impacts. For instance, a heavy rain can wash tons of soil from a field, and the material can either scour a streambed or settle out and cover gravel that fish spawn in. Long after the water itself clears, populations of fish and other aquatic life may still not have recovered. Similarly, when trees and bushes are cut next to stream banks, debris falling into the stream or washing into the water may initially degrade the water, but a longer-term problem may be caused by persistent elevated water temperatures resulting from the removal of shade. In time, altered water temperatures can make the stream a less sustainable habitat for fish and other animals or may make it totally uninhabitable. For nearly 40 years, the Clean Water Act has played a critical role in reducing water pollution and improving the health of the nation’s waters, including rivers, lakes, and streams. The purpose of the law is to restore and maintain the chemical, physical, and biological integrity of the nation’s waters. Passed by Congress in 1972, the act marked a shift in clean water policy, establishing a significant federal role in controlling point sources of water pollution. Through the 1980s, EPA issued a series of reports, including a key 1984 report to Congress, finding that in the decade following passage of the act, control of point sources had resulted in significant achievements toward water quality goals but that these point source reductions had illuminated the nonpoint source contribution to water quality problems. For example, at that time, a majority of EPA regions identified nonpoint sources as the principal remaining cause of water quality problems. In 1987, Congress amended the Clean Water Act, adding section 319 and creating the nonpoint source management program. Section 319 provides for annual grants to be administered by EPA and incentives for states to develop and implement nonpoint source management programs. Section 319 includes various minimum conditions that states must meet to receive grants, including the development of nonpoint source management programs—which EPA must approve—and annual reports on states’ progress in achieving the goals of their management programs. States must also obtain a determination from EPA that they made “satisfactory progress” in meeting their goals from the prior year. For its part, EPA has discretion by statute to add terms and conditions to grants; to require additional information on applications; and to request additional information, data, and reports it considers necessary to determine continuing state eligibility for grants. The 1987 amendments to the Clean Water Act created the nonpoint source program, but EPA did not receive an appropriation to implement the program until fiscal year 1990. In the early 1990s, EPA focused on developing technical support for states’ use in developing and implementing their programs.
During these initial stages, some states and EPA regions focused their nonpoint source programs narrowly on demonstrations of particular pollution control technologies. In response to EPA program guidelines issued in 1996, states upgraded their programs, and in so doing, several states incorporated watershed-based approaches as a significant and sometimes central organizing theme. These states focused their pollution reduction efforts in specific watersheds—areas of land through which all the rainfall and streams flow downhill toward a main river channel (see fig. 1). According to EPA documents, state nonpoint source programs that adopted this approach improved their capacity to solve nonpoint source pollution problems. In the early 2000s, EPA encouraged states to sharpen the focus of their nonpoint source management programs toward impaired water bodies. Specifically, EPA and the states determined they needed to target their efforts to reduce nonpoint source pollution within defined geographic areas representing the most severe water quality problems. According to EPA documents, the two key steps states need to solve nonpoint source problems at the watershed level are the development of a watershed-based plan that addresses water quality needs within a watershed and the actual implementation of the plan. In 2003, EPA issued Federal Register guidelines for the section 319 program, which remain in use today. These guidelines followed a substantial increase in appropriated funds. Key features of the guidelines include the following:

The guidelines direct states to use about half their section 319 grants to develop and implement watershed-based plans for impaired watersheds. Beyond targeting funds to geographic areas in need of restoration, the guidelines also allow states to fund activities that generally support section 319 goals, such as technical assistance, staffing, projects that demonstrate innovative approaches to pollution reduction, and education programs that promote awareness and changes in behavior.

EPA’s 10 regional offices, which provide oversight of the section 319 program, are to place special emphasis on reviewing states’ progress in developing and implementing watershed-based plans according to the guidelines. Regional offices are to review and discuss with states the projects states select for section 319 funding to ensure the plans’ effective implementation. It is through this review process that regional offices have the opportunity to influence the types of projects states select if they believe that the projects selected by a state are not adequate to effectively reduce nonpoint source pollution.

States are encouraged to leverage section 319 funds with projects from other federal programs that have water quality objectives, including USDA’s Environmental Quality Incentives Program (EQIP). This program is designed to fund conservation practices on working agricultural land to achieve national priorities, including reducing soil erosion and nonpoint source water pollution. The guidelines state that section 319 funding is especially suitable for supporting activities that either are ineligible for or typically do not receive significant USDA funding, including developing watershed-based plans in impaired watersheds, monitoring water quality, and funding staff to work with local communities to help assist and promote the development and implementation of watershed-based plans.
Under EPA’s section 319 program, states retain the primary role for addressing nonpoint source water pollution. EPA’s 10 regional offices annually distribute program funds to the states using a formula that is weighted heavily toward state population and the number of acres in agricultural crop production. States develop their own project selection processes and the criteria that their nonpoint source management programs will consider when determining what projects to fund. Annually, each state submits its list of selected projects to the EPA regional office for incorporation into the state’s work plan, which describes what projects will be funded through section 319. Organizations that apply for section 319 funds—often including conservation districts, local governments, and nonprofit organizations—submit project proposals to states’ nonpoint source management programs and, if selected, are responsible for implementing their proposed nonpoint source pollution projects under an agreement with the state. Section 319 is a nonregulatory program, and many states therefore rely primarily on voluntary approaches to address nonpoint source pollution. The programs’ nonregulatory status, combined with private ownership of much of the nation’s land, means that securing voluntary landowner participation is a key aspect of nonpoint source pollution control and, according to EPA, can introduce significant uncertainty in how and when projects are implemented. Under EPA’s section 319 program, states have funded many projects that have helped successfully address nonpoint source pollution and restore and protect water bodies across the country, but states have also funded projects that have encountered significant challenges—including many that could have been prevented. This section discusses (1) the types of projects states have selected to address various categories of nonpoint source pollution, (2) the successes some section 319-funded projects have achieved in restoring impaired water bodies, and (3) projects states have funded that encountered preventable challenges. States’ nonpoint management programs have used their annual section 319 grants from EPA to fund projects that address different categories of nonpoint source pollution. The scope of individual section 319 projects varies considerably; common activities include direct implementation of conservation practices, education and outreach efforts, water quality monitoring, and funding of state nonpoint source management program staff. According to data from EPA’s Grants Reporting and Tracking System, from fiscal year 2004 through fiscal year 2010, states awarded more than $1.2 billion in section 319 funds to more than 5,800 projects. Projects have been funded in all 50 states and by many tribes and have been targeted to seven different categories of nonpoint source pollution, mainly agricultural, urban and stormwater runoff, and hydromodification (see fig. 2). States funded section 319 projects that supported a mix of direct and indirect approaches to help restore, protect, or prevent further degradation of water quality. Direct approaches generally involved projects implementing conservation practices or other corrective actions to directly reduce or eliminate pollutants entering a water body. 
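Returning briefly to the allocation formula noted above: the report describes it only as weighted heavily toward state population and agricultural crop acreage, and does not give the actual weights. The sketch below shows one way such a weighted, share-based distribution could work; the 0.5/0.5 weights, state figures, and grant total are hypothetical assumptions, not EPA's formula.

```python
# Minimal sketch of a weighted, share-based state allocation, assuming the
# formula combines each state's share of population and of agricultural crop
# acres.  Weights, state figures, and the grant total are hypothetical.

POP_WEIGHT = 0.5
CROP_WEIGHT = 0.5

states = {
    # state: (population, crop acres) -- illustrative numbers only
    "State A": (5_000_000, 10_000_000),
    "State B": (1_000_000, 25_000_000),
    "State C": (12_000_000, 2_000_000),
}

def allocate(total_grant_dollars, states):
    """Split a grant total across states by a weighted blend of shares."""
    total_pop = sum(pop for pop, _ in states.values())
    total_acres = sum(acres for _, acres in states.values())
    allocations = {}
    for name, (pop, acres) in states.items():
        share = POP_WEIGHT * pop / total_pop + CROP_WEIGHT * acres / total_acres
        allocations[name] = share * total_grant_dollars
    return allocations

for state, amount in allocate(165_000_000, states).items():
    print(f"{state}: ${amount:,.0f}")
```

The actual formula may differ in structure as well as in weights; the point is only that each state's grant reflects a blend of its shares of the weighted factors.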
Common direct approaches included, but were not limited to, agricultural conservation practices, such as the installation of fences to exclude cattle from shorelines or stream banks; erosion control projects; and stormwater discharge projects such as the installation of surfaces in parking lots that absorb rainfall rather than allow it to run off into urban streams. Indirect approaches typically involved activities to help build state and local capacity to address nonpoint source pollution, raise public awareness, and assess water quality in particular places of concern and commonly involved methods such as education and outreach, watershed planning, and staffing. According to our analysis of EPA data, approximately 45 percent of projects that states funded in fiscal years 2004 through 2010 under section 319 involved direct approaches and were designed primarily to implement activities to directly restore, protect, or prevent further degradation of water quality. The categories of pollution that were the most common focus of direct restoration approaches—such as implementing agricultural conservation practices, stabilizing stream banks, or restoring a stream’s natural channel configuration—include agricultural pollution and pollution resulting from urban and stormwater runoff (see fig. 3). According to our analysis of EPA data, approximately 55 percent of projects that states funded in fiscal years 2004 through 2010 under section 319 were designed primarily to implement activities that indirectly help restore, protect, or prevent further degradation of water quality. We classified such projects into the following six broad groups based on the type of indirect activities they were primarily designed to support (see fig. 4):

Planning activities include the development of various planning documents designed to help identify and address nonpoint source pollution, such as watershed-based plans and total maximum daily loads (TMDLs) for pollutants.

Education activities include statewide and local education and information projects, such as educating local officials about the causes and effects of nonpoint source pollution or developing nonpoint source-related educational curriculums for use in schools.

Water quality monitoring and assessment activities include biological monitoring and assessments to determine water body health and monitoring the effectiveness of conservation practices.

Management and staff activities include funding the administrative and personnel costs associated with state nonpoint source management programs, as well as supporting other program management efforts, such as funding watershed coordinators at the local level.

Technical assistance activities include activities such as engineering assistance related to implementing conservation practices provided to state or local entities.

Other activities include a variety of regulatory and enforcement activities, as well as activities related to groundwater and soil analyses.

States have used section 319 funds to improve the condition of water bodies impaired by nonpoint source pollution. EPA reported that as of December 2011, restoration efforts supported by section 319 funds had helped 49 states partially or fully restore 356 water bodies that were listed under section 303(d) of the Clean Water Act as impaired by nonpoint source pollution.
In addition, many other section 319-funded projects continue to restore portions of water bodies and may help them attain their water quality standards or designated aquatic uses in the near future. According to our survey results, project managers for 72 percent of the projects reported that their projects accomplished all objectives originally identified in the project proposal. In addition, water quality improvements resulting from section 319-funded projects have been demonstrated across a variety of categories of nonpoint source pollution. For example: Agricultural runoff: Since 2007, Pennsylvania has awarded more than $700,000 in section 319 funds to three projects to help implement the Hungry Run Watershed Implementation Plan. Two of these projects focus on installing suites of complementary agricultural conservation practices on farms located in the watershed, such as stream bank fencing, riparian buffers (undisturbed or planted areas along stream banks), and cover crops to address agricultural runoff. State and local officials we spoke with said that water quality monitoring data suggest that as a result of these efforts, Hungry Run may be removed from Pennsylvania’s list of impaired waters in the future if its water quality continues to improve as a result of the department’s efforts. These projects also illustrate the value of putting multiple conservation practices in place at the same location to increase the overall effectiveness of water quality restoration efforts, according to the project manager. Urban and stormwater runoff: In Michigan, urban development and its associated runoff led to the impairment of Malletts Creek, a tributary to the Huron River. To help address this impairment, the Michigan Department of Environmental Quality has allocated more than $230,000 in section 319 funds to a project designed to implement stormwater conservation practices at a local library containing a portion of the creek on its property. Practices implemented under this project included a vegetated green roof on the library to absorb rainwater, as well as basins filled with native vegetation to help slow the flow of water from the library’s parking lot into the creek. Water quality testing showed that the project’s conservation practices reduced the pollutants in stormwater leaving the site, including a greater than 40 percent reduction in copper, lead, oil, and grease, along with a 66 percent reduction in zinc. Hydromodification: In Washington state, the natural flow of Jimmycomelately Creek was altered in the past to facilitate farming and the construction of roads and buildings. One effect of this alteration was a significant decline in chum salmon in the creek during the 1990s. Using approximately $300,000 in section 319 funds awarded to the Jamestown S’Klallam Tribe, along with several million dollars from other federal, state, and local sources, the effort to restore Jimmycomelately Creek involved removing roads and buildings, altering the creek channel, and planting native vegetation. A tribal official involved with this project told us that the section 319 funding helped the restoration effort include an additional tributary of the creek. The creek is still listed as impaired on Washington state’s 303(d) list, but the tribe reported that more than 4,000 summer chum salmon returned to the stream to spawn in 2010 (up from 7 fish in 1999). 
The tribal official told us that this biological indicator is an important sign that the project was successful at reducing pollution and restoring water quality in the creek.

Resource extraction: In West Virginia, the Department of Environmental Protection has used more than $625,000 in section 319 funds to install treatment systems that remove metals and neutralize acidic water draining from abandoned coal mines in the Lambert Run watershed. The acidic water drains from the mines into Lambert Run, a stream that Department of Environmental Protection staff described as having such high concentrations of iron and aluminum that fish and other water-dwelling life had not been able to survive before the installation of the treatment systems. The treatment systems channel mine drainage into ponds, where the acidic water is neutralized through contact with limestone, and metal pollutants are removed. EPA reported that, as of February 2011, the treatment systems had helped to restore approximately 2.3 miles of the original 4.4 miles of impaired stream, and additional restoration efforts focused on the water body were in progress. State officials we spoke with said that monitoring data indicate that within 3 or 4 years, the stream may be removed from the state's list of impaired water bodies.

Some states have directed section 319 funding toward projects that did not achieve their objectives, and many projects that did achieve their objectives still faced challenges. Specifically, projects that relied on voluntary participation sometimes did not achieve their goals when third-party buy-in was not secured in advance. Other projects used indirect approaches (e.g., community outreach) that did not have a clear connection to achieving tangible water quality results. Nevertheless, in recent years some states have adopted more rigorous project selection processes to avoid these challenges. According to our survey results, project managers for 28 percent of all projects that involved implementing conservation practices or pollution remediation techniques reported that their projects were unable to accomplish all objectives originally identified in the project proposal (the 95 percent confidence interval for this estimate is 23 to 34 percent). These projects were generally unable to implement the desired number or type of conservation practices or to implement them in the originally proposed locations. Moreover, of the 72 percent of the projects that project managers reported achieved their originally proposed objectives, almost half did so only after encountering significant challenges that prevented them from finishing on schedule, staying on budget, or achieving the desired levels of pollution reduction. Many of the challenges that project staff reported facing resulted from bad weather, staff turnover, or other factors outside their control. Nevertheless, of the 132 project managers that submitted narrative responses to our survey, 71 (54 percent) cited challenges that generally could have been identified and mitigated before projects were proposed or selected for funding. For example, project staff's inability to secure third-party buy-in, such as landowner cooperation to implement the projects as intended, was the most commonly identified challenge (49 out of 132 responses to the question, or 37 percent).
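The survey figures above are estimates drawn from a sample of funded projects, which is why the report pairs them with confidence intervals. The sketch below shows, under simplifying assumptions, how an interval for a sample proportion can be approximated; it uses a plain normal approximation and hypothetical counts rather than the survey's actual design and weighting, so it is illustrative only.

```python
import math

def proportion_ci(successes: int, sample_size: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95 percent confidence interval for a sample proportion.

    Simplifying assumptions: a simple random sample and the normal
    approximation; a design-based survey estimate would apply weights
    and adjust for stratification.
    """
    p = successes / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical counts for illustration only.
low, high = proportion_ci(successes=37, sample_size=132)
print(f"point estimate: {37 / 132:.0%}, approximate 95% CI: {low:.0%} to {high:.0%}")
```

Because the report's intervals reflect the survey's specific sample design, they will generally differ somewhat from a simple calculation like this one.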
A project on the Illinois River was to reduce pollution by implementing conservation practices in urban and forested areas, such as rain gardens planted with native plants, which absorb urban and stormwater runoff, and prescribed burns on forested lands. The state of Illinois provided section 319 funds to a regional planning organization to (1) put in place 2,500 urban and stormwater management practices and (2) implement prescribed burns on 1,000 forest acres. The organization, however, did not implement the project as proposed because, after receiving funding, it was unable to compel landowners to implement the practices on private property; the organization had not secured the landowners' consent before applying for the funding. As a result, only 11 of the intended 2,500 urban stormwater practices were ultimately implemented, and because prescribed burns could not be conducted, forest conservation measures were implemented on only 282 of the intended 1,000 acres.

In West Virginia, Department of Environmental Protection officials selected a project that was to subsidize the cost to homeowners of pumping and replacing damaged septic systems in rural areas, among other practices. The department awarded a nonprofit organization nearly $450,000 to implement the project, of which $285,000 was for the septic system component. A departmental official explained that the project was designed to be a 5-year project—3 years for implementation and 2 years for monitoring. The project was in its third year of implementation in 2011, but as of November 2011, only a single homeowner had signed up to have a septic system replaced, even though the project imposed no cost on homeowners. A representative from the nonprofit organization said that when the project was proposed, project staff had not verified whether landowners in the project area would participate. The representative told us that, after the organization received section 319 funds, the project did not generate much interest in the community and that her organization may have to return unspent funds to the Department of Environmental Protection to be reallocated to another project.

In Arizona, Department of Environmental Quality officials selected a project that aimed to install hundreds of native willows in two areas on the shore of a lake impaired by high levels of nitrogen and phosphorus. The department awarded section 319 funds to a nonprofit organization to plant the willows, whose root systems would absorb polluted runoff and thus help prevent nutrients from entering the lake. The project manager reported that after section 319 funds were received, project staff advertised the project to community members and reached out to landowners near the lake. Some landowners, however, were unwilling to have the nonprofit organization plant willows on their properties, according to the project manager's report. Some willow trees were ultimately planted in one of the two areas, but the report concluded that the project's water quality goals were not achieved and that no measurable reduction in pollutants came from the planting.

Securing voluntary third-party cooperation ahead of time is particularly important for agricultural projects, whose success largely depends on landowners implementing a suite of complementary conservation practices; by their nature, such projects can make it especially challenging to secure the third-party participation needed to reduce nonpoint source pollution effectively.
According to EPA officials and USDA research, agricultural projects address nonpoint source pollution best when conservation practices are implemented as part of a suite of complementary practices. When landowners implement conservation practices without all of the proper companion practices, however, there is less assurance that the practices will produce the intended water quality benefits, according to state environmental protection officials we spoke with. The need for complementary conservation practices to protect water quality is often evident on lands where livestock graze. On such land, for example, installing livestock exclusion fencing in isolation does not ensure that the negative water quality effects of grazing will be significantly reduced; additional practices must be put in place. Livestock exclusion fencing is largely ineffective without riparian buffers, which help prevent stream bank erosion and absorb nutrients from manure, according to the nonpoint source coordinator for West Virginia's Department of Environmental Protection. When riparian buffers are absent, livestock still congregate near stream banks, altering stream ecosystems and allowing sediment and manure to enter the water. Pennsylvania's Department of Environmental Protection funded a project that demonstrated the value of enlisting landowner cooperation to install a suite of complementary practices to protect water quality on land where livestock graze. The department awarded $1.2 million in section 319 funds to a project in Mifflin County, Pennsylvania, in which a conservation district installed suites of conservation practices on several farms to keep livestock from congregating near an impaired stream and entering it. Such practices included livestock exclusion fencing, riparian buffers, stream crossings, and off-stream watering facilities for livestock. In this case, conservation district staff said they proposed this project because the landowners were willing to install all of the complementary practices needed to keep livestock away from the stream. Further, this project is likely to help restore the stream to such a condition that it will soon be removed from the state's list of impaired waters, according to the project manager. State nonpoint source management program officials in one state we visited said that because section 319 is a voluntary program, they hesitate to require participating landowners who are willing to install a given conservation practice to also install all the complementary practices that may be needed to ensure that water quality is protected. For example, in Arkansas, the state's nonpoint source management program encourages certain agricultural conservation practices to be installed together as a suite to maximize water quality benefits, but it does not require section 319 fund recipients to implement all of these practices on agricultural land. One state official acknowledged that this process does not always result in implementing the conservation practices that would produce the greatest water quality benefit and that landowners may resist implementing the most effective conservation practices. Consequently, less effective practices are sometimes chosen for projects to ensure sufficient landowner participation, he said. Our analysis of EPA data indicated that section 319 funds may have sometimes paid for conservation practices to be put in place without all of the proper companion practices.
Our analysis of EPA's Grants Reporting and Tracking System data showed that when section 319 funding was used to install livestock fencing, more than half the time the fencing may have been installed without all of the proper companion practices—those practices that state environmental protection officials and USDA research indicate are needed to reduce nonpoint source pollution from livestock grazing. Specifically, in projects from 2004 through 2010 in which about 700 separate livestock exclusion fences were installed, the following additional practices were installed with the fencing: about 50 stream crossings (7 percent), 260 riparian buffers (37 percent), and 225 watering facilities or troughs (32 percent). (The data do not reflect instances where additional practices may have already been in place before the fencing was installed.) A simplified sketch of this type of companion-practice tally follows the examples below.

In November 2011, EPA issued a national evaluation of the 319 program (Environmental Protection Agency, A National Evaluation of the Clean Water Act Section 319 Program (Washington, D.C.: November 2011)). According to this report, projects that use indirect approaches provide funds that enable the states to work with federal, state, local, private-sector, and watershed groups to gain cooperation and to leverage dollars, authorities, and other resources to solve or prevent nonpoint source pollution problems. These funds also provide critical support for state staff to conduct project planning and selection, monitoring, and partnership building, which are critical to ensure successful implementation of watershed-based plans. Nevertheless, some states have used section 319 funds for indirect projects that did not have a clear link to tangible water quality results, and state officials told us they are drawing on lessons learned from funding such projects to help inform their decisions on whether to fund similar projects in the future.

One section 319-funded project in Arkansas had the stated purpose of training teachers and conservation district employees to teach a conservation education curriculum, so that they in turn could encourage students to participate in a conservation program, thereby developing students' appreciation and awareness of natural resources. Although designed to teach students about nonpoint source pollution, the project did not have a clear link to tangible reductions in nonpoint source pollution or changes in behavior stemming from the use of section 319 funds. According to the state's nonpoint source program director, his staff found that while the curriculum materials this project funded were well received by many teachers, staff were unable to ascertain the effectiveness of the project and whether it resulted in any behavioral change.

A project in California aimed to implement conservation practices in a creek that was impaired in part by sewage disposal from septic systems. To do so, the project sought to increase community education through several outreach initiatives, among other activities. Project staff held workshops and community events, which about 200 people attended, and conducted follow-up surveys, which showed that 80 to 90 percent of residents had increased their understanding of environmental conditions and watershed pollution. Nevertheless, in the project's final report, project staff concluded that changing the habits of residents to actually implement conservation practices—such as pumping septic systems, planting streamside vegetation, and limiting fertilizer use—was much more complicated than originally anticipated. Despite increased levels of awareness, the report concluded, few conservation practices were implemented.
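As noted above, the fencing figures come from a tally of which companion practices appear alongside livestock exclusion fencing in the same funded projects. The sketch below illustrates one way such a co-occurrence tally could be computed; the record structure, field names, and sample values are assumptions for illustration and do not represent the actual schema of EPA's Grants Reporting and Tracking System.

```python
from collections import Counter

# Hypothetical project records; each lists the conservation practices funded.
# Field names and values are illustrative, not the GRTS schema.
projects = [
    {"project_id": "P-001", "practices": ["livestock_exclusion_fence", "riparian_buffer"]},
    {"project_id": "P-002", "practices": ["livestock_exclusion_fence"]},
    {"project_id": "P-003", "practices": ["livestock_exclusion_fence", "watering_facility",
                                          "stream_crossing"]},
]

companions = ["stream_crossing", "riparian_buffer", "watering_facility"]

# Among projects that installed fencing, count how often each companion
# practice was funded in the same project.
fence_projects = [p for p in projects if "livestock_exclusion_fence" in p["practices"]]
tally = Counter()
for p in fence_projects:
    for c in companions:
        if c in p["practices"]:
            tally[c] += 1

total = len(fence_projects)
for c in companions:
    share = tally[c] / total if total else 0.0
    print(f"{c}: {tally[c]} of {total} fencing projects ({share:.0%})")
```

As the report notes, a tally like this cannot account for companion practices that were already in place before the fencing was installed.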
In West Virginia, the state nonpoint source program director told us that the state's Department of Environmental Protection used section 319 to fund a project to promote best management practices on oil and gas drilling sites and access roads. The project was designed to establish a training program for company inspectors that would help them identify drilling sites and access roads contributing to sediment runoff into impaired waters. The training would help company inspectors promote the design and implementation of management practices in these areas to reduce runoff and encourage compliance with such practices. This project promoted positive practices, but according to a departmental official, few management practices were ultimately implemented. In addition, he said, the project was not directly linked to specific water quality outcomes, such as the number of sites and roads on which practices were to be implemented, and he will likely not use section 319 funds to fund similar projects in the future.

Projects using section 319 funds to pay for staff to promote enrollment in USDA conservation programs have also sometimes lacked approaches and intended outcomes that were directly aligned with addressing nonpoint source pollution. Project managers with whom we spoke said that when such projects do not contain concrete deliverables, such as the number and specific location of acres enrolled in conservation programs, there is limited assurance that the most vulnerable land will be protected. For example, Kansas awarded $225,000 in section 319 funds to a project whose objectives were to provide dedicated assistance to USDA's Conservation Reserve Program (CRP) and eliminate delays in providing program enrollment assistance to agricultural crop producers. Funding was used to hire from 24 to 30 part-time staff to help interested landowners enroll in CRP, complete paperwork, and enroll acres in the program in shorter-than-typical time frames. According to the state project manager, the project did not include a specific focus on promoting CRP in areas of concern for nonpoint source water pollution. She also told us that the state Department of Health and Environmental Services has since concluded that projects such as these are not always the most cost-effective because the number of vulnerable acres enrolled has not been proportional to the resources invested. As a result, the number of funded staff decreased to 8 the next time the project was proposed, and the state project manager said that such projects may not be funded in the future.

Similarly, West Virginia's Department of Environmental Protection used section 319 funds for several projects aimed, in part, at enrolling landowners in USDA's CRP and Conservation Reserve Enhancement Program (CREP). According to the state's nonpoint source program director, the department has not had much success in reducing nonpoint source pollution when section 319 funds are used to enroll land in CRP and CREP because the pieces of land contributing the largest amounts of agricultural runoff are not always enrolled. As a result, he told us, he will be reluctant to use section 319 funds on projects aimed at enrolling landowners in these programs. In its November 2011 national evaluation of the 319 program, EPA found that states' success in controlling agricultural nonpoint source pollution when funding these types of indirect projects has been mixed.
The report noted that, on the one hand, these projects can help develop and strengthen key partnerships among federal programs and are critical for making significant progress in remediating large numbers of water bodies impaired by nonpoint source pollution. According to the report, coordinated efforts between state nonpoint source programs and NRCS state conservationists have occurred in about one-half of all states, where farm bill program funding is distributed in whole or in part in accordance with the states' nonpoint source program goals and priorities. But EPA also reported that many states have had difficulty obtaining significant, broad-based, recurring support from USDA programs for nonpoint source program priorities. Many states therefore identified improved coordination and collaboration with USDA programs as a key nonpoint source program goal, according to EPA's report.

Our review of some states' experiences and EPA's 2011 evaluation report has shown that the challenges associated with third-party participation and with projects whose approaches are not clearly linked to tangible water quality results can largely be avoided when states use more rigorous project selection processes. For example, in 2006, Ohio's nonpoint source management program staff examined the types of organizations that commonly received its section 319 funding and found that more than 70 percent of funding had gone to soil and water conservation districts, county health departments, and regional planning agencies, which typically have little authority to address water quality problems, according to Ohio's nonpoint source program director. Ohio's program staff found that these organizations often used section 319 funding to pay for staff salaries without producing a proportionate improvement in water quality from the projects they implemented. Further, when funds were used for implementing projects, these organizations' projects were not implemented as anticipated in the grant application—typically resulting in substantially less water quality improvement than originally intended. For example, in 2004 Ohio's EPA allocated $500,000 under section 319 to one agricultural project to install 15 different complementary conservation practices in an impaired watershed. The organization that received the grant, however, was unable to convince farmers in the impaired watershed to adopt the conservation practices, and after numerous grant revisions and extensions, it implemented only 3 of the 15 planned conservation practices, according to Ohio's nonpoint source management program director. Instead, he told us, the majority of funding was used to purchase 111 pieces of equipment for tilling fields and handling and transporting manure, which can help reduce the amount of nutrient runoff entering nearby water bodies. What had started as a comprehensive approach to changing farmers' behavior and farming practices evolved into an agricultural equipment acquisition project because agricultural equipment was what the landowners wanted, he explained. In addition, Ohio's program review showed that section 319 funds were being used to support payroll for 27 full-time-equivalent staff positions in local organizations for some projects whose indirect approaches did not contain objectives or deliverables that addressed nonpoint source pollution problems, according to the nonpoint source management program director.
Further, the nonpoint source management program director told us that the water quality results were not proportionate with the investment. Following the 2006 review, Ohio's nonpoint source management program changed its project selection criteria to favor grantees with authority to implement projects on the ground or projects for which any necessary landowner buy-in was secured in advance. Ohio EPA's application process now requires that specific properties for proposed projects be identified before grants are awarded and that assurances from property owners be obtained in advance, so that conservation practices can be implemented as planned. These programmatic changes have also prompted Ohio's nonpoint source program to fund fewer projects that rely on indirect approaches. Consequently, since 2007, Ohio's nonpoint source program has funded fewer than 5 full-time-equivalent positions in local organizations. Ohio EPA's nonpoint source program director told us that these changes have helped the agency make significant progress in achieving the state's water quality goal.

Other states have changed their project selection processes along the same lines. In its November 2011 report, EPA found that, as part of 15 states' project selection processes, nonpoint source program staff coordinate with project staff at the local level before selecting proposed projects for section 319 funding. The report goes on to say that this preproposal coordination helps increase local understanding of state nonpoint source program priorities, identifies potential project partners, gauges local receptivity to projects, and provides greater assurance of potential project success. These efforts typically improve the quality of proposals and, ultimately, water quality results from section 319-funded projects, according to EPA's report. For example, Colorado's nonpoint source management program increased the rigor of its project selection process by working with local officials to identify the highest-priority water quality issues so that the state can better support the projects that will be most effective in addressing them. The report notes that 20 states' project selection processes explicitly take into consideration a project's feasibility for successful implementation.

EPA regional offices have varied widely in the extent of their oversight and the amount of influence they have exerted over state nonpoint source management programs. In addition, EPA's primary measures of the effectiveness of state management programs may not always demonstrate the achievement of program goals—which are to (1) eliminate remaining water quality problems and (2) prevent new threats from creating future impairments—or reflect the achievements of some critical state activities for reducing nonpoint source pollution. EPA's 10 regional offices varied in their oversight of states' nonpoint source management programs and the extent to which they influenced the projects states funded through section 319. This variability is seen most notably in regional offices' reviews of states' annual work plans and project selection criteria, which are to describe the activities that states' nonpoint source management programs plan to undertake in the upcoming year and the parameters for determining which projects are eligible to receive section 319 funds from the state.
To oversee states' nonpoint source management programs, EPA regional offices by regulation are to determine, before annually awarding section 319 funds to states, that achievement of states' proposed work plans is feasible—which means that states are to demonstrate that the projects described in the work plan can be implemented. Officials from most regional offices reported to us that they do not assess the feasibility of specific projects. Nevertheless, regional offices have almost always determined that states have made satisfactory progress in achieving their program goals—a condition that must be met for states to receive section 319 funding the following year. Among their responsibilities for oversight of state nonpoint source management programs as provided in regulation and guidance, EPA's 10 regional offices perform two key functions. First, they review state nonpoint source management program plans, which are to identify states' goals for addressing nonpoint source pollution. Second, they review states' annual work plans and project selection criteria, which are to describe the activities that states' nonpoint source management programs plan to undertake in the upcoming year to meet program goals and the parameters for determining which projects are eligible to receive section 319 funds from the state.

Regional offices have varied in the extent of their review of states' nonpoint source management program plans, which are to ensure that states align the goals of their programs with the highest-priority water quality impairments. Some regional offices have encouraged states to modify their plans, whereas other regional offices have not. For example, regional office officials reported to us that Region 5 has encouraged all of its states to revise their management plans within the past 5 years, whereas Region 1 has not encouraged any of its states to update their plans since 1999. Section 319 guidance to states indicates that they should update their nonpoint source management plans if EPA finds that the practices and measures proposed in such plans are not adequate to reduce the level of nonpoint source pollution. Overall, EPA's 2011 report evaluating states' implementation of the section 319 program found that EPA regional offices have not required 28 states to upgrade their nonpoint source management program plans since 1999 or 2000. As a result, according to the report, these states' plans play a diminished role or are simply ignored in the current implementation of the states' programs and do not adequately reflect innovations that have become available during the past decade, including watershed-based planning and low-impact development. One primary reason for the variations in oversight among the regional offices could be that EPA headquarters has not issued specific implementing guidance to the 10 regional offices on how they are to fulfill their regulatory oversight responsibilities for the 319 program.

Regional offices have also varied in the extent of their review of each state's annual work plans and project selection criteria. Officials from three regional offices told us that they reviewed annual work plans in depth and played an active role in influencing the types of projects selected. For example, Region 4 officials reported that they helped ensure that several states within the region targeted their section 319 funds to severely impaired watersheds and, before granting funding, selected project applicants who were willing and able to implement projects.
One Region 4 state, for example, established a priority-setting system in collaboration with the regional office, which helped state staff review and rank project proposals. (Region 4 states include Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee.) Final project selections were generally made after proposals were ranked and reviewed by various stakeholders, including Region 4 officials. An official with another regional office told us that this kind of process provides regional offices with the opportunity to guide states to spend funds on certain geographic regions or to encourage implementation of projects in watersheds that have a watershed implementation plan and willing landowners. Reviews by some regional offices have also resulted in modifications to the state programs' annual goals. Nevertheless, most of the 10 regions reported that although they provide feedback to states on specific project proposals before states select projects, they do not systematically assess the merits and feasibility of specific projects. As noted above, EPA headquarters has not issued specific guidance to the 10 regional offices, including on how to review states' plans for project feasibility and criteria to ensure that funded projects have characteristics that reflect the greatest likelihood of effective implementation and tangible water quality results.

Notwithstanding the variation in regional offices' reviews of nonpoint source management program plans and annual work plans, regional offices have almost always determined that states have made satisfactory progress in achieving their program goals, which states must do to receive section 319 funding the following year. Regional office officials told us that it is more common for regional office staff to work with states that are at risk of not achieving satisfactory progress than to withhold funds. For example, according to Region 8 officials, the regional office almost withheld its determination of satisfactory progress from Wyoming for not having developed satisfactory watershed-based plans, but it granted the determination instead and then worked with Wyoming state program staff to direct future funds toward developing TMDLs with implementation plans that would include all the elements of satisfactory watershed-based plans. Officials from regional offices reported to us that determinations of satisfactory progress were made according to a variety of factors specific to each state's program, such as the number of projects completed and reductions in pollutant loads, which states commonly reported in nonpoint source management program reports. Officials with one regional office told us that the determination of satisfactory progress is a fairly low bar and that they were generally reluctant to withhold this determination because states would then not receive funds to address a significant water quality problem.

In addition to requiring states to report on their progress in meeting milestones for their nonpoint source management programs, section 319 requires states to annually report to EPA on two measures of effectiveness resulting from implementation of their management programs: (1) reductions in loadings of specific nonpoint source pollutants and (2) improvement in water quality of water bodies identified on states' lists of impaired waters as requiring nonpoint source controls to meet water quality standards.
Section 319 does not limit EPA to these two measures of effectiveness, but the agency has chosen to use these two reporting requirements as barometers of success for the section 319 program. Specifically, EPA provides nationwide data on these two measures to Congress to report on the progress that the section 319 program makes each year toward achieving the program's goals of (1) eliminating remaining water quality problems and (2) preventing new threats from creating future impairments. Reporting on the two measures of effectiveness is statutorily required, but as described in the 2003 Federal Register guidelines and other EPA documents, states can demonstrate the achievements of nonpoint source management programs in additional ways—ways that in many respects may provide a more accurate picture of environmental outcomes and reflect the achievements of some critical state activities for reducing nonpoint source pollution, such as the number, kind, and condition of living organisms in the water.

For the first national performance measure of effectiveness—reductions in loadings of specific nonpoint source pollutants—EPA requires states to provide information on the amounts by which nitrogen, phosphorus, and sediment have been reduced in water bodies where section 319 projects targeting such pollutants have been implemented. According to several state environmental protection officials, EPA's focus on these reductions as one of two primary reporting requirements has inherent limitations. For instance, it has encouraged some states to design their nonpoint source management programs and select projects to maximize reductions in specific pollutants, but the projects and activities associated with reducing these pollutants do not always address the root cause of a nonpoint source pollution problem that prevents living organisms from inhabiting the water. One example of this limitation is the manner in which states have used section 319 funds to mitigate the effects of channeling streams in areas of agricultural production. Stream channeling occurs when farmers straighten creeks and streams to maximize the amount of land that can be farmed and make it easier to move machinery across fields (see fig. 5). Channeling streams, however, removes vegetation along stream banks, alters streambed configuration and water flows, and disrupts stream food webs and other life-supporting systems, according to EPA documents, and such effects may extend far downstream. Projects to mitigate the adverse effects of channeled streams may yield large pollutant reductions, but because they do not address the straightening of the streams, the goal of healthier water bodies—as measured by biological indicators, such as the number, kind, and condition of living organisms in the water—is not always achieved. For example, to absorb runoff and help reduce loads of nitrogen, phosphorus, and sediment, section 319 funds have often been used for projects that install 20- to 120-foot-wide grass filter strips as a buffer between cropland and adjacent water bodies. These filter strips, however, do not solve the water quality problems caused by the loss of streamside vegetation and altered streambed configuration, according to an engineer with Ohio's nonpoint source management program. From 2001 to 2005, one midwestern state used section 319 funds to install more than 40,000 feet of grass filter strips, which were estimated to have reduced nitrogen by more than 500,000 pounds and phosphorus by 168,000 pounds annually.
Yet in the 7 to 11 years since the grass filter strips were installed, none of the watersheds where they were installed has demonstrated any measurable improvement in stream health, as indicated by the number, kind, and condition of living organisms in the water, according to the engineer who monitored the streams. Some state officials expressed concerns about what happens when this performance measure drives project selection. For example, an official with Iowa's Department of Natural Resources told us that focusing on this reporting requirement has compelled his state's nonpoint source management program to select projects that are likely to substantially reduce nitrogen, phosphorus, and sediment, even though such reductions may not address actual causes of stream impairment or improve the condition of water bodies for aquatic life. In addition, an official with Michigan's Department of Environmental Quality told us that he generally has the flexibility to choose projects that are the most cost-efficient and effective at addressing nonpoint source pollution. To satisfy EPA, however, he told us he feels compelled to use section 319 funds for several projects each year that may not be the most important in addressing nonpoint source pollution problems but that are likely to yield large reductions in pollutants.

In contrast, we also found limited instances in which states funded projects (such as stream restoration) that, rather than concentrating on reductions of specific pollutants, sought instead to address the underlying causes of impairment. One project in Ohio in 2011 involved the headwaters of the Big Darby River, which had been channeled more than a century ago to help agricultural producers increase crop yields. The channeling, however, removed streamside trees and shrubs and altered the stream's flow, thereby increasing water temperatures and reducing the stream's ability to support life. Furthermore, officials said that numerous local agricultural activities contributed nitrogen, phosphorus, and sediment to the stream. Project staff did not focus solely on the pollutants used by EPA as key measures; instead, their holistic design focused on restoring the natural configuration and flow of about one mile of the stream's headwaters. Project engineers told us that when this restoration is completed, the stream will be able to better assimilate pollutants, water temperature will fall, stable stream banks will reduce erosion, and structure and habitat for living things will be restored for approximately 75 downstream miles (see fig. 6).

EPA research and guidelines over the past several decades acknowledge the advantages of incorporating biological indicators (e.g., the number, kind, and condition of living organisms) into state water quality programs to better reflect environmental outcomes. In 2005, EPA stated that the most direct and effective measure of the integrity of a water body is the status of its living systems and that the use of biological information can help states improve water quality protection. Moreover, EPA's 2003 Federal Register guidelines list demonstrable improvements in biological or physical parameters—such as increased diversity in fish or insect populations or improved riparian areas—as a key method for measuring the progress and success of state nonpoint source programs.
Nonetheless, despite the advantages of implementing projects that address the varied root causes of water bodies' impairment and associated downstream effects, EPA data show that state nonpoint source management programs have far more often funded practices that generally reduce pollutant loads than those more directly linked to improving the number, kind, and condition of living organisms in water bodies. For example, since 2004, filter strips and other similar practices have been funded more often than stream restoration projects—at least 1,000 projects compared with about 175—even though stream restoration projects generally result in more living organisms in the water.

For the second national performance measure of effectiveness—the improvement in water quality of water bodies identified on state lists of impaired waters as requiring nonpoint source controls to meet water quality standards—EPA asks states to provide information on the number of water bodies that are removed from these lists. EPA tracks this number to document how states' efforts are improving water quality across the nation and to demonstrate to Congress the program's annual progress in reducing nonpoint source pollution. Since 2000, states have removed more than 350 water bodies from their lists of impaired waters. EPA's focus on this second measure may also influence state project selection in ways that may reduce the effectiveness of their nonpoint source management programs. According to four officials from the eight states we visited, this focus has compelled state program staff to choose projects for water bodies that were close to being restored and removed from the states' 303(d) lists of impaired waters, rather than solely those that are most degraded. For example, Maryland's nonpoint source program manager told us that the program's project selection committee targeted projects for water bodies close to removal from the state's list of impaired water bodies over water bodies with more serious water quality problems, where such funding could have had a greater impact on nonpoint source pollution. He told us that such projects are selected because EPA expects each state to “deliver” several such successes each year. According to the official, each approach has its merits: targeting water bodies close to removal is more likely to result in meeting standards for the particular water body, while targeting more seriously impaired water bodies demonstrates incremental improvement and typically yields a greater pollutant load reduction, which benefits downstream waters like the Chesapeake Bay.

The emphasis on this measure of effectiveness has also encouraged some state program staff to focus on restoring impaired water bodies even when they have determined that greater benefit could be achieved by protecting high-quality water bodies not yet listed as impaired. Environmental protection officials from several states we visited told us that if EPA put a greater emphasis on protecting high-quality water bodies, they would likely select some different projects on water bodies that are not yet impaired but are threatened by nonpoint source pollution. Instead of focusing solely on EPA's two performance measures, Maine's Department of Environmental Protection requested and received permission from EPA to fund projects that focus on preventing pollution of lakes, streams, and coastal waters by, for example, providing training opportunities to advance stream protection efforts.
The state’s nonpoint source program’s annual report for 2010 reported that such a focus is far more cost-effective than the long-term investments needed to restore waters once they become polluted. EPA’s former nonpoint source pollution chief acknowledged that EPA’s emphasis on the two statutorily required national performance measures makes it difficult to judge which states are making more progress than others in addressing nonpoint source pollution. He said that protecting undisturbed lakes and streams is critical for protecting aquatic life but that such projects rarely demonstrate substantial reductions in pollutant concentrations. He also said that, conversely, reductions in pollutants are reported each year in some states without associated improvements in biological indicators. EPA does not require states to provide information on their progress under the nonpoint source program in improving water bodies’ condition for aquatic life or the protection of high-quality water bodies. In EPA’s November 2011 program evaluation, the agency reported that it will issue new section 319 guidelines to states in fiscal years 2012 to be implemented in 2012 and 2013. EPA reported that these guidelines will generally address program accountability but did not specify whether such accountability will include measures to more accurately reflect the overall health of targeted water bodies (e.g., the number, kind, and condition of living organisms) or demonstrate states’ focus on protecting high-quality water bodies, where appropriate. USDA’s Environmental Quality Incentives Program (EQIP) is the key agricultural conservation program that can complement EPA’s efforts to reduce nonpoint source pollution. According to the Department’s Natural Resources Conservation Service (NRCS), which manages the program, it has resulted in substantial pollutant reductions in key watersheds across the country.program could adversely affect water quality if installed without the proper suite of companion practices to mitigate these adverse effects. NRCS officials maintain that its procedures ensure that conservation practices conserving one resource (e.g., soil) do not inadvertently harm another (e.g., water), and that its quality control measures ensure they are followed at the ground level. Our analysis of the EQIP data shows that nutrient management plans and other conservation practices of one kind Nonetheless, certain conservation practices under the or another have often been put in place. The EQIP data, however, is kept at an aggregated level and does not reveal which mitigation measures are applied for site-specific conditions on the ground—information which is necessary to determine whether the mitigation measures are effective. Under EQIP, NRCS funds conservation practices throughout the nation that are intended to reduce nonpoint source pollution and, among other things, soil erosion. NRCS has developed standards for each conservation practice, which provide science-based criteria that participating agricultural producers are supposed to follow to address soil, water, air, plant, animal, and energy resource concerns. Each practice’s potential effects on soil, water, and air quality are documented in the agency’s assessments of the conservation practices’ physical effects. Almost one-half the conservation practices can address nonpoint source water pollution, according to officials in NRCS’ Office of Science and Technology. 
According to NRCS technical documents, some conservation practices could have unintended, negative effects on water quality if installed without the proper "companion practices" capable of mitigating the potential negative effects. This is because EQIP-funded practices may have distinct purposes, such as reducing soil loss or improving soil conditions for agriculture, that are not oriented toward improving water quality. For example, NRCS often funds underground outlet systems, which help move surface water to a "suitable outlet," such as a drainage ditch, to reduce soil erosion. Such systems can help conserve soil, but NRCS conservation practice physical effects assessments show that such systems can also help transport nutrients (nitrogen and phosphorus) and pesticides from nutrient-laden fields into outlets that in turn feed nearby water bodies. According to officials in NRCS' Office of Science and Technology, the agency has therefore put procedures in place to help ensure that all resources, including water quality, are protected when EQIP funds are used. These procedures include the following:

Environmental evaluation: NRCS' National Environmental Coordinator told us that the agency requires its field planners to perform an environmental evaluation for every proposed conservation practice as part of NRCS' process to comply with the National Environmental Policy Act. The environmental evaluation helps field planners identify existing soil and water resources on land where a conservation practice is proposed and analyze the effects of the proposed practice on those same resources. In short, NRCS field planners are to assess whether the proposed actions for improving one resource (e.g., reducing soil erosion) will negatively affect another resource (e.g., water quality) and document their assessment and determination.

Compliance with state plans for impaired waters: With respect to water quality, field planners are to identify whether a proposed practice is on or near a stream listed by the state as impaired. If it is, then field planners are directed to review and comply with any existing pollution limits or watershed plans that have been established by the state. This process asks field planners to ensure that landowners are provided with options to install practices that will not further degrade the stream segment. The final decision on what practices to install ultimately rests with the landowner.

Nutrient management planning: In the event of a significant net negative effect on water quality, field planners are to document it, which would trigger an environmental assessment or an environmental impact statement under the National Environmental Policy Act, according to NRCS' National Environmental Coordinator. He said that significant net negative effects on water quality are rare because landowners instead typically agree to implement other, mitigating practices along with the proposed practice. The agency's key method to help ensure that the potential negative effect is mitigated is termed nutrient management planning. Nutrient management plans describe a coordinated combination of conservation practices that help farmers manage the amount, form, placement, and timing of fertilizer to support crop production while also minimizing polluted runoff. These plans are site specific and are developed by NRCS field planners and implemented by landowners.
Alternative mitigation measures: According to NRCS Science and Technology officials, if landowners choose not to adopt nutrient management plans, NRCS field staff may still work with them to develop alternative mitigation measures—that is, additional conservation practices, often in a suite, to minimize potential adverse effects on water quality. Such mitigation measures may include, for example, planting cover crops on agricultural fields or installing filter strips on field borders. Each of these practices helps reduce runoff of nutrients and pesticides from fields into nearby water bodies by absorbing pollutants in plant root systems.

During the course of our field work, we identified instances that raised questions as to whether NRCS' procedures to protect water quality were always implemented as intended, particularly in watersheds where EPA's section 319 funds had been used. To ascertain whether such instances were isolated occurrences or indicative of a broader problem, we examined NRCS' national-level data to determine the extent to which NRCS' procedures intended to ensure that all resources, including water quality, are protected were followed. In our field work, we found some indications that site-specific mitigating practices were not implemented when practices with potentially negative effects on water quality were installed. For example, nonpoint source management program staff in two states told us that NRCS field staff have sometimes funded soil conservation practices without appropriate alternative mitigation measures. Specifically:

An agriculture specialist in Washington State's nonpoint source pollution program told us that NRCS field staff have authorized funding of stream crossings for livestock. The departmental official told us that stream crossings in Washington were installed on certain properties where livestock exclusion fencing prevented cattle from entering the stream. The stream crossings were to be installed for emergency purposes only, in case off-site watering facilities were unable to function, and closed off with gates. He explained, however, that some landowners chose not to implement additional conservation practices, such as off-site watering facilities in places that would keep the cattle away from the stream, and they have kept the gates permanently open, meaning that livestock now enter certain streams more often than they did before installation of the stream crossings. A former agriculture specialist with the program echoed this concern, stating that on land where livestock graze in eastern Washington, it is not uncommon for NRCS conservation practices intended to protect water quality to be implemented in such a way that actually increases the number of livestock entering streams.

An engineer with Ohio's Environmental Protection Agency told us that NRCS promoted hayland buffer strips to help reduce soil erosion and absorb nutrient runoff as part of an NRCS water quality-focused program. The buffer strips were to be 100 to 200 feet wide, located adjacent to a water body, and to remain in place for 3 years. Landowners received $100 per acre each year for installing these buffer strips. But on some properties, riparian vegetation—and the water quality protection it provides—was removed so that landowners could install hayland buffers.
According to an engineer with Ohio's Environmental Protection Agency, hayland buffers provide less overall protection to water quality than wooded riparian vegetation, such as that which was removed.

To determine whether the instances we observed when meeting with state officials were isolated examples or indicative of a more prevalent problem, we examined EQIP data on three key parameters: (1) the universe of conservation practices funded by NRCS field units that could have negative water quality effects in watersheds in which section 319 projects were funded; (2) for that universe of practices, the extent to which nutrient management plans were in place to mitigate unintended adverse effects on water quality; and (3) where nutrient management plans were not in place, the extent to which alternative mitigation practices were in place that could reliably serve that same purpose.

Conservation practices that can affect water quality. EQIP data show that from 2005 through 2010, of the 47,000 practices that NRCS field units funded in watersheds where states allocated section 319 funds, nearly 8,000 were types of individual conservation practices that could facilitate agricultural runoff or have other unintended consequences unless other mitigating measures were implemented along with them. These 8,000 practices were funded in about 820 watersheds.

Use of nutrient management plans. EQIP data show that nutrient management plans—the agency's primary method for ensuring that practices intended to conserve one resource (e.g., soil) do not inadvertently harm another resource (e.g., water)—were funded on properties in less than one-third of the watersheds where soil conservation practices that NRCS acknowledges could degrade water quality were also funded. According to NRCS officials, there are several reasons why the data on nutrient management plans do not provide a complete picture of water protection efforts. For example, it is possible that some of the acres within the 820 watersheds where these practices were funded were actually not vulnerable—that is, were not close to water bodies of concern—and therefore did not require a nutrient management plan to ensure that water quality was protected. In addition, officials told us that nutrient management plans expire after 3 years, so nutrient management plan contracts that expired during the period of our analysis, and that could have addressed the practices with potential adverse effects on water quality, would not appear in the data. Furthermore, landowners often continue nutrient management practices well beyond the length of a contract because they recognize the economic and environmental benefits of the improved nutrient use efficiency those practices provide, according to NRCS officials.

Use of alternative mitigation measures. As noted above, the absence of a nutrient management plan does not mean that an NRCS conservation practice will adversely affect water quality, if proper alternative mitigation measures are in place. According to NRCS officials, when a conservation practice with a potentially negative effect is identified and a nutrient management plan is not in place, conservation planners are alerted so that they know to plan any site-specific mitigating practices to ensure positive outcomes.
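The three-part data screen described above (identify practices that could affect water quality in section 319 watersheds, check for nutrient management plans, and then look for alternative mitigating practices) can be illustrated with a small sketch. The record layout, field names, and practice labels below are illustrative assumptions and do not represent the actual EQIP data structure.

```python
# Hypothetical, simplified records: one row per funded practice.
# Field names and practice labels are illustrative, not the EQIP schema.
records = [
    {"watershed": "HUC-0001", "practice": "underground_outlet"},
    {"watershed": "HUC-0001", "practice": "nutrient_management_plan"},
    {"watershed": "HUC-0002", "practice": "underground_outlet"},
    {"watershed": "HUC-0002", "practice": "cover_crop"},
    {"watershed": "HUC-0003", "practice": "underground_outlet"},
]

# Practices that, per the discussion above, may need companion practices.
POTENTIALLY_HARMFUL = {"underground_outlet"}
MITIGATING = {"nutrient_management_plan", "cover_crop", "filter_strip"}

# Group funded practices by watershed.
by_watershed: dict[str, set[str]] = {}
for row in records:
    by_watershed.setdefault(row["watershed"], set()).add(row["practice"])

flagged = {w for w, practices in by_watershed.items() if practices & POTENTIALLY_HARMFUL}
with_nmp = {w for w in flagged if "nutrient_management_plan" in by_watershed[w]}
with_other_mitigation = {
    w for w in flagged - with_nmp
    if by_watershed[w] & (MITIGATING - {"nutrient_management_plan"})
}
unmitigated = flagged - with_nmp - with_other_mitigation

print(f"watersheds with potentially harmful practices: {len(flagged)}")
print(f"  with nutrient management plans funded:       {len(with_nmp)}")
print(f"  with other mitigating practices funded:      {len(with_other_mitigation)}")
print(f"  with neither (needs site-level review):      {len(unmitigated)}")
```

As the report emphasizes, a watershed-level tally like this cannot show whether the specific practices on a given parcel actually mitigate site-specific risks; that determination requires the detailed records held in NRCS field offices.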
However, when we examined EQIP data provided by USDA to determine if alternative mitigation measures were funded in the two-thirds of watersheds where nutrient management plans had not been funded, we found the data to be too highly aggregated to allow for a determination as to whether the conservation practices reflected in the data were the appropriate practices that could mitigate site-specific problems. For example, the EQIP data show that the agency funds an average of 4 conservation practices per EQIP contract. This summary information, however, does not shed light on the type of practices that are being installed, whether the combination of those practices has a water quality focus, or whether the practices are effective in mitigating the potentially adverse effects on water quality of the practices in question. According to NRCS officials, detailed, project-specific information, while not available at the national level, is available in NRCS' many field offices across the country. NRCS field staff, for instance, are supposed to document their site-specific determinations, which include information on, among other things, how conservation practices are to be implemented in a way that protects all resources. Moreover, the field offices are subject to NRCS' internal quality assurance processes designed to ensure that all contracts are structured to protect all resources, including water quality, and that projects are appropriately tailored to reflect site-specific conditions. NRCS program officials, however, have not obtained or analyzed this site-specific information. Without examining such data, it is difficult to see how NRCS can assure itself or the Congress that certain practices are not having unintended effects on water quality.

The magnitude, pervasiveness, and dispersed nature of nonpoint source pollution make it particularly difficult for states to control. EPA has achieved some notable successes through its section 319 program and, in recent years, has helped states target their nonpoint source pollution reduction efforts in watersheds with the most severe water quality problems. Now more than ever, EPA's and states' limited budgets make it critical that the most effective projects are selected for funding. In some cases, however, states used section 319 to fund projects that were not effectively implemented or not clearly linked to tangible water quality results. Our review of the experiences of some states and EPA's 2011 evaluation report has shown that such issues can potentially be avoided when states use more rigorous project selection processes. EPA's regional offices can constructively influence the types of projects that states fund through the program, but they are generally not reviewing states' plans for project feasibility or for project selection criteria that would help ensure that funded projects have characteristics reflecting the greatest likelihood of effective implementation and tangible water quality results. As state programs have evolved over the last decade, some have shown that certain characteristics of proposed projects, such as securing third-party buy-in in advance, can provide greater assurance that these projects will achieve tangible water quality results, although such lessons learned have not been systematically adopted by all states.
In addition, EPA’s emphasis on two statutorily required reporting measures as measures of effectiveness—to the exclusion of other measures—may not be fully capturing information reflecting program achievements and may, in some cases, influence state project selection toward narrow measures of nonpoint source pollution over comprehensive results. As a result of EPA’s focus on these primary measures, states are sometimes selecting projects targeted to meet those measures, rather than selecting projects that could have larger impacts on improving the health of impaired or threatened water bodies. At present, certain EPA documents discuss the advantages of additional performance measures that may more accurately reflect the overall health of water bodies, such as conditions for aquatic life, but the agency does not require states to use such measures to provide information on their progress under the nonpoint source program. As a result, most states report on reductions of specific pollutants, rather than on indicators of overall health of targeted water bodies (e.g., the number, kind, and condition of living organisms) or on protection of high-quality water bodies that are not impaired. EPA plans to soon issue new section 319 guidelines to states that generally address program accountability, but it is unclear whether and to what extent these new guidelines will include measures to more accurately reflect the overall health of targeted water bodies or demonstrate states’ focus on protecting high-quality water bodies, where appropriate. USDA’s Environmental Quality Incentives Program has helped to substantially reduce sediment, nitrogen, and phosphorus runoff across the country. By their nature, however, some of the conservation practices supported by the program have the potential to inadvertently conflict with EPA efforts to reduce nonpoint source water pollution. While NRCS procedures strive to minimize such problems, state environmental officials identified instances where these procedures may not always have their intended effect. NRCS has cited highly aggregated data to demonstrate that a preponderance of mitigating practices—an average of four practices per project—counters any possibility that such unintended effects occur. However, the EQIP data provided by USDA lack the details needed to assess whether these other practices mitigate the potential for negative effects. The most meaningful data on the use and effectiveness of mitigating practices, the site-specific information that resides within NRCS’ field offices, have been neither obtained nor analyzed by NRCS program officials. Tapping and analyzing these data could more accurately inform NRCS, and other interested parties including the Congress, on the extent to which EQIP projects may inadvertently affect water quality in areas where section 319 funds are used. We are making three recommendations to help protect the quality of our nation’s water resources.
To strengthen EPA’s implementation of its responsibilities under the Clean Water Act’s section 319 nonpoint source pollution control program, we recommend that the Administrator of EPA take the following two actions: provide specific guidance to EPA’s 10 regional offices on how they are to fulfill their oversight responsibilities, such as how to review states’ plans for project feasibility and criteria to ensure that funded projects have characteristics that reflect the greatest likelihood of effective implementation and tangible water quality results, and, in revising section 319 guidelines to states and in addition to existing statutorily required reporting measures, emphasize measures that (1) more accurately reflect the overall health of targeted water bodies (e.g., the number, kind, and condition of living organisms) and (2) demonstrate states’ focus on protecting high-quality water bodies, where appropriate. To provide assurance that efforts to conserve soil resources do not work at cross-purposes with efforts to protect water quality, we recommend that the Secretary of Agriculture direct the Chief of the Natural Resources Conservation Service to analyze available information, and obtain necessary information from field offices, to determine the extent to which appropriate mitigation measures are implemented when nutrient management plans are not in use, particularly in watersheds where states are spending section 319 funds. We provided a draft of this report to the Administrator of the Environmental Protection Agency and to the Secretary of Agriculture for their review and comment. EPA provided written comments in an April 16, 2012, letter, in which the agency expressed general agreement with the report’s two recommendations calling for improved and more consistent regional oversight, and for improved and more comprehensive program measures. The letter also cited GAO’s “constructive engagement with the EPA headquarters, the EPA regions, and state nonpoint source control program staff.” It did, however, also question our characterization of several points related to project selection and effectiveness. The letter is included in appendix II along with our responses to the agency’s comments. USDA’s NRCS provided written comments in an April 20, 2012, letter that did not specify whether the agency concurred with our recommendations. The letter acknowledged that we addressed some of the concerns NRCS raised in reviewing an earlier “Statement of Facts” we had provided NRCS officials as a means of verifying the factual information we had planned to use in drafting our report. However, in its letter, NRCS took issue with what it characterized as “several inaccuracies” that remained after the draft report was sent to USDA for comment. Specifically, NRCS identified two related concerns with the draft report, stating that (1) the message conveys that USDA soil conservation practices have unintended negative impacts on water quality and (2) this inaccuracy appears to be based on misinterpretation and subsequent misuse of a generalized planning tool (the Conservation Practice Physical Effects matrix), lack of knowledge of the NRCS conservation planning process, and inferences that exceed the limitations of the data on which they are based. Regarding the first concern, our draft report acknowledged the goals and accomplishments of NRCS’ Environmental Quality Incentives Program in mitigating the impacts on water quality of certain agricultural practices.
We revised the language in the draft report to further discuss the program’s benefits in response to NRCS comments. That said, our field work identified instances where the program’s goal of mitigating agricultural impacts appeared—on occasion—not to have been carried out at the ground level, and where water quality may have been affected as a result. Addressing NRCS’ second concern, in an effort to ascertain whether the instances we observed of water quality being affected on the ground were anecdotal or more prevalent, we examined USDA data and other information. Specifically, we examined (1) information from NRCS’ Conservation Practice Physical Effects matrix; (2) data on EQIP conservation practices funded in watersheds where states had spent section 319 funds; and (3) information on the extent to which “alternative mitigation measures” are used, as required, when nutrient management plans are not in use. In examining this information, we concluded that the EQIP data provided by USDA do not contain site-specific information on the extent to which alternative mitigation measures are effectively employed when nutrient management plans are not used. As we state in the report, without examining such information, neither we nor NRCS could determine that certain practices are not having unintended effects on water quality. It was for this reason, and to ensure that complete data are available to allow NRCS and others to assess whether the program has unintended water quality impacts, that we recommended that NRCS analyze available information and obtain site-specific information from field offices as necessary. NRCS would then be in a better position to determine the extent to which appropriate mitigation measures are implemented when nutrient management plans are not in use (and particularly in watersheds where states are spending section 319 funds). We therefore continue to believe this recommendation has merit. NRCS’ letter is included in appendix III, along with our responses to its comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of EPA, the Secretary of Agriculture, the appropriate congressional committees, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of our work were to examine (1) states’ experiences in funding projects that effectively address nonpoint source pollution problems, (2) the extent to which the Environmental Protection Agency (EPA) oversees the section 319 program and measures program effectiveness in reducing the adverse impacts of nonpoint source pollution on water quality, and (3) the extent to which key agricultural conservation programs complement EPA’s efforts to reduce nonpoint source pollution. To conduct this work, we reviewed relevant laws, regulations, and agency guidance. In addition, we visited 8 states in four EPA regions: Arkansas, Louisiana, Maryland, Michigan, Ohio, Pennsylvania, Washington, and West Virginia.
We chose these states on the basis of their varied types of nonpoint source pollution and proximity to some of the nation’s premier watersheds, such as the Chesapeake Bay, Great Lakes, and Mississippi River. On our site visits, we met with state nonpoint source management program officials, conservation districts, nonprofit organizations, watershed associations, and research officials. We also interviewed agency officials, including officials from EPA, the U.S. Department of Agriculture (USDA), EPA regional offices, the Natural Resources Conservation Service’s Office of Science and Technology, and the Environmental Quality Incentives Program. We also interviewed representatives from the National Association of Conservation Districts, Association of State and Interstate Water Pollution Control Administrators, New England Interstate Water Pollution Control Commission, and state abandoned mine land programs. We discussed with these officials their observations on efforts to reduce nonpoint source pollution and the challenges associated with such efforts. We also interviewed subject-matter experts from industry and academia. To examine states’ experiences in selecting projects that effectively address nonpoint source pollution, we e-mailed a 10-question data collection instrument to nonpoint source program management officials in all 50 states; the instrument solicited information from them on project selection processes, project selection criteria, types of organizations receiving funding, program organization and responsibilities, and program oversight practices. We received responses to this data collection instrument from every state. We also examined summary information, including objectives, methods, and outcomes, for more than 1,500 projects in EPA’s Grants Reporting and Tracking System database. We reviewed this information for all projects in the database that (1) were awarded funding during or after fiscal year 2004 and completed before December 31, 2011; (2) were categorized as nonstatewide projects; (3) received section 319 funds; and (4) had complete information on objectives, methods, and outcomes. Before reviewing project summary information and drawing the sample for our survey, we interviewed EPA officials to discuss the reliability of the data contained in EPA’s Grants Reporting and Tracking System; we also checked for outliers and determined that the data were sufficiently reliable for our purposes. In addition, we surveyed a random sample of 524 managers of projects that have been implemented with section 319 funds. The purpose of this survey was to examine topics such as general project information, project proposal and selection, conservation practice selection, project implementation and goals, challenges associated with projects, monitoring and oversight, and funding sources. To identify issues pertaining to section 319-funded projects and to develop the survey questions, we reviewed state annual reports and interviewed headquarters and regional agency officials and subject-matter experts. We selected an initial simple random sample from EPA’s Grants Reporting and Tracking System out of a universe of 1,584 projects. We selected projects that received section 319 funding; were completed between January 1, 2004, and June 9, 2011; and involved implementing conservation practices or remediation techniques, rather than, for example, projects that focused primarily on planning or monitoring.
After drawing the initial sample, we removed duplicates and excluded projects that we learned were statewide, because such projects did not always involve implementing conservation practices or remediation techniques. We also excluded projects from North Dakota and South Dakota because some incomplete projects and some that never started in these states were listed as complete in EPA’s Grants Reporting and Tracking System. After obtaining contact information for the sampled projects, we also excluded those projects for which knowledgeable officials were no longer available. After making these adjustments, we estimated that the number of projects meeting our criteria was 1,273. For those project managers who had more than 2 projects sampled, we randomly selected 2 projects for the survey in order to reduce respondent burden. After these adjustments, the final number of projects in the sample was 524. The results of our survey are generalizable to the population of section 319 projects that meet our criteria. That is, they are not generalizable to projects that, for example, did not implement conservation practices or were completed before January 1, 2004. The survey was conducted using self-administered electronic questionnaires posted on the World Wide Web. While developing the survey questions, we conducted two rounds of pretests with section 319 project managers over the phone. We conducted six first-round exploratory pretests with managers to help develop the scope of the questionnaire and key concepts. After refining our concepts and questions, we pretested a draft version of the questionnaire with five project managers. We conducted pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We made changes to the content or format of the questionnaire after each pretest according to the feedback we received. A draft of the questionnaire was also reviewed by independent GAO survey experts, and we revised the questionnaire to reflect that review. We contacted survey respondents by sending the survey through an e-mail notification to each. We e-mailed each potential respondent a unique password and username to ensure that only members of the target population could participate in the survey. The survey data were collected from September 2011 through November 2011. We sent follow-up e-mail messages to those who had not responded by the deadline to our original e-mail. We then telephoned all remaining nonrespondents for whom contact information was available, beginning in October 2011. We received a total of 298 responses, accounting for an overall unweighted response rate of 57 percent. Estimates produced from the sample of projects are subject to sampling error. We express our confidence in the precision of our results as a 95 percent confidence interval. This interval would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report includes the true values in the study population. Additionally, to encourage honest and open responses, we pledged in the introduction to the survey that we would report information in the aggregate and not report data that would identify a particular respondent.
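The response rate and precision statements above can be illustrated with a short calculation. This is a simplified normal-approximation sketch for a single survey proportion, with a finite population correction; it is not a reproduction of GAO's actual estimation procedures, and the 28 percent input is used only for illustration.

```python
# Simplified sketch of the survey's sampling figures; GAO's actual estimation
# (weighting, nonresponse adjustment) is more involved than this.
import math

population = 1273          # estimated projects meeting the survey criteria
sample = 524               # projects selected for the survey
responses = 298            # completed questionnaires

response_rate = responses / sample
print(f"Unweighted response rate: {response_rate:.0%}")      # about 57 percent

# Illustrative 95 percent confidence interval for a hypothetical estimated
# proportion (e.g., share of projects reporting a given outcome).
p = 0.28                                                      # illustrative input only
fpc = math.sqrt(1 - responses / population)                   # finite population correction
se = math.sqrt(p * (1 - p) / responses) * fpc
margin = 1.96 * se
print(f"Estimate {p:.0%}, 95% CI roughly {p - margin:.0%} to {p + margin:.0%}")
```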
This report does not contain all the results from the survey; the survey and a more complete tabulation of the results are provided in a supplement to this report (see GAO-12-377SP). To eliminate data-processing errors, we independently verified the computer program that generated the survey results. In addition to tabulating and analyzing the frequencies of survey responses, we conducted a content analysis of all of the open-ended narrative responses received to survey questions 33, 34, and 35. We analyzed the content of the 180 responses to question 33 and the 106 responses to question 34 to identify the types of challenges faced and reasons for grant revisions and extensions. Question 33 was coded using the following categories: property access, lack of identified project sites, staff, weather, budget, technical, the National Environmental Policy Act, administrative barriers, coordination, other, exempt, and unclear. Projects coded as exempt included those for which no challenge was encountered or the project was found to be outside of the scope of the sample. Projects were coded as unclear if it was unclear whether a challenge was encountered or if the nature of the challenge was unclear. Question 34 was coded using the following categories: time, budget, location, project scope or specifications, other, exempt, and unclear. Projects coded as exempt included those for which no grant extension was required or the project was found to be outside of the scope of the sample. Projects were coded as unclear if the reason for the revision or extension was unclear. We also analyzed the 129 responses to question 35, which was coded to reflect the number of months of the grant extension, including use of unclear and exempt categories. Coding was performed independently by two coders; team members then met to discuss the coding categories and reached consensus on the final coding category assignment for each response. Measures of interrater reliability were calculated before codes were reconciled and found to be sufficiently high for the purposes of this analysis. The numbers of responses in each content category were then summarized and tallied. To examine the extent to which EPA oversees the section 319 program and measures program effectiveness in reducing the adverse impacts of nonpoint source pollution on water quality, we obtained from EPA’s 10 regional offices information on the nature and extent of their oversight of state programs, including the extent to which they examine states’ project selection processes, annual plans, and program objectives and the criteria they use to annually award funds. In addition, we examined section 319’s statutorily required reporting measures, which EPA uses as national measures of program effectiveness. We evaluated the water quality benefits derived from projects that address these measures, compared with the water quality benefits of projects that address other EPA-approved measures of state program effectiveness, primarily by reviewing EPA documents and interviewing state nonpoint source program officials. We also obtained annual reports from 42 states’ nonpoint source management programs and reviewed 25 of them to determine how they reported the achievements of section 319-funded projects during the most recent fiscal year for which the report was available.
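The report does not name the interrater reliability statistic used for the content analysis described above. One common choice for two coders assigning categorical codes is Cohen's kappa; the sketch below uses invented codes and counts purely for illustration.

```python
# Illustrative Cohen's kappa for two coders; the category labels and counts
# are invented for the example and are not GAO's actual coding data.
from collections import Counter

coder_a = ["budget", "weather", "staff", "budget", "other", "weather", "budget", "staff"]
coder_b = ["budget", "weather", "staff", "budget", "weather", "weather", "budget", "other"]

n = len(coder_a)
observed_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal category frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
categories = set(coder_a) | set(coder_b)
expected_agreement = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Observed agreement {observed_agreement:.2f}, kappa {kappa:.2f}")
```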
To examine the extent to which key agricultural conservation programs complement EPA’s efforts to reduce nonpoint source pollution, we analyzed data on USDA’s conservation practices funded under the Environmental Quality Incentives Program that have been implemented in watersheds where states have allocated section 319 funds. We obtained these data from the Program Contracts System, known as ProTracts, which is used to manage Natural Resources Conservation Service conservation program applications, cost-share contracts, and program funds. We also examined USDA reports on the effectiveness of conservation practices, including those produced by the Conservation Effects Assessment Project. To assess the reliability of this database, we performed electronic data testing for missing data, outliers, and obvious errors. In addition, we interviewed knowledgeable agency officials and compared summary information from the database with published reports. On the basis of this assessment, we determined that ProTracts data were sufficiently reliable for our purposes. We also interviewed USDA officials in the Natural Resources Conservation Service’s Office of Science and Technology. We conducted this performance audit from December 2010 through May 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. EPA stated that our report should better align with the generally positive finding that over 70 percent of watershed projects accomplished all of their originally identified objectives. As EPA suggested, we made several changes to our report to put the 70 percent figure in context. In response to EPA’s comment, we note two additional considerations. First, the fact that 72 percent of projects eventually achieved their goals does not suggest that they all did so in either a timely fashion or without significant complications. For example, as we state in our report, almost half of the projects that achieved their goals did so after encountering significant challenges that prevented them from finishing on schedule, staying on budget, or achieving the desired levels of pollution reduction. Second, we did not intend to arbitrarily identify a specific pass/fail threshold for the success of surveyed projects or to suggest that 70 percent is an acceptable or unacceptable share of projects that achieved originally proposed goals. Instead, for those projects that did not achieve their originally proposed goals and for those that did so while encountering challenges, we described the challenges that projects most often faced—and their main causes—to provide information that may assist EPA and the states in developing or modifying project selection criteria or in otherwise better ensuring that projects receiving section 319 funds in the future will have a high likelihood of achieving pollution reduction and other project goals. 2. EPA stated that we should not broadly characterize projects that use adaptive management (i.e., projects in which some issues cannot be identified in advance but rather need to be addressed after a project is under way) as facing “preventable challenges.”
EPA noted in particular that it is not feasible for grant applicants to obtain landowner buy-in in advance of project selection. We acknowledge the difficulty that EPA and the states face in trying to reduce nonpoint source water pollution and protect threatened waters using primarily voluntary methods and that numerous unforeseen factors can affect the success of project implementation. Our survey results showed, however, that the chief reason some projects did not achieve their originally proposed goals was that third-party buy-in was not secured in advance. Moreover, a key component of EPA’s strategy under the section 319 program is to have states lay the groundwork in advance, using “indirect” projects (e.g., education and outreach activities) to obtain local support so that direct implementation projects can succeed. The experiences of some states, such as Ohio, have shown that where rigorous project selection criteria have been put in place—such as requirements to secure landowner participation in advance—the quality of projects has increased over time (as measured by the ability of project applicants to actually implement their projects as intended), and local partners have been more effective in meeting such requirements. 3. EPA stated it was concerned by our statements about EPA data indicating that section 319 funds paid for conservation practices “to be put in place in isolation” and that it did not believe the analysis supported this conclusion. The draft report had acknowledged that EPA’s Grants Reporting and Tracking System (GRTS) data do not provide a complete picture of multi-agency efforts to implement conservation practices. In response to EPA’s comment, we revised the report to further acknowledge that, for the properties reflected in our analysis of GRTS data, complementary practices may have been installed by NRCS, landowners, or others. Notwithstanding the incompleteness of the GRTS data, however, our field work showed that some section 319-funded projects to reduce agricultural runoff were ineffective because the proper suites of companion practices were not installed. We noted that environmental protection officials from several of the states we visited told us that they had often encountered this problem with section 319-funded projects to reduce agricultural runoff. 4. EPA cites a requirement in its guidelines that a watershed-based plan be in place before funding implementation projects for impaired waters as adding further assurance that, in selecting projects, “water quality problems are analyzed at a watershed scale, critical areas are identified, and a suite of necessary practices are identified and implemented through the plan.” We acknowledge the guidelines’ requirement for a watershed-based plan and its value. Nonetheless, despite the good intentions behind such watershed-based plans, EPA does not have regulatory authority to compel either the implementation of the plans or the suites of practices described within them, particularly on agricultural land. That choice is ultimately left to the landowners within the plans’ geographic areas. Thus, while having a watershed-based plan could help promote positive outcomes, the plan in and of itself is no guarantee that its requirements will be fulfilled. 5.
EPA stated that our allusion to the persistence of nonpoint source pollution, “particularly given more than 20 years of funding for the Section 319 program,” does not account for the pervasiveness, variety, and magnitude of nonpoint source pollution nationwide relative to the federal funding levels and scope of the section 319 program. We acknowledge the massive scope of the nation’s nonpoint source water pollution problem and that its persistence and pervasiveness are not something that EPA can be expected to solve with section 319 funding alone. We have adjusted the text accordingly. At the same time, we believe that the funding levels allocated to the section 319 program are not insignificant and that the improvements suggested in this report can enhance the program’s contributions toward alleviating the nation’s nonpoint source pollution problem. 1. NRCS commented on what it referred to as “several inaccuracies” that remained after the draft report was sent to USDA for comment. As discussed in the Agency Comments section of this report, in examining NRCS information, we concluded that the EQIP data provided by USDA do not contain site-specific information on the extent to which alternative mitigation measures are employed when nutrient management plans are not used. We continue to believe, as we state in the report, that without examining such information, neither we nor NRCS could determine that certain practices are not having unintended effects on water quality. 2. NRCS commented on our finding that from 2005 to 2010 the Environmental Quality Incentives Program funded nearly 8,000 practices that—if implemented in isolation—can increase nonpoint source pollution. Specifically, in its letter, the agency stated that “NRCS does not implement practices in isolation. In fact, NRCS analysis of its Environmental Quality Incentives Program data show an average of four practices funded per contract.” We acknowledge that NRCS does not fund practices in isolation and have revised the report to clarify this statement. Nonetheless, our statement is correct in that the nearly 8,000 practices we identified have the potential to have a negative effect on water quality if, depending on site-specific conditions, the proper companion practices are not installed along with them. We note in the report that NRCS generally funds multiple practices per contract—an average of 4, according to NRCS’ letter—but that this figure does not shed light on the type of practices that are being installed and whether the additional practices properly mitigate the potentially negative effects on water quality. It was for this reason that we recommended that NRCS analyze available information, which may involve obtaining site-specific information from field offices, to determine the extent to which appropriate mitigation measures are implemented when nutrient management plans are not in use. 3. NRCS stated that we misinterpreted the agency’s use of its conservation practice physical effects (CPPE) assessments, noting that, “The CPPE matrix is a generalized, national tool that provides a first approximation of potential effects for each practice… Interpretation of the effects of individual practices using the CPPE matrix is not appropriate for assessing overall effects of site-specific conservation plans.” We acknowledge that USDA generally implements suites of practices as part of site-specific conservation plans and have made changes in the report to clarify this point.
Nevertheless, as NRCS’ letter states, the CPPE matrix provides an “approximation of potential effects” for each practice and, as such, can give an indication of the kind of effects that can occur if site-specific conservation plans are not implemented as intended. For instance, NRCS’ letter further states, “If a potentially negative effect is identified, then the conservation planner is alerted to this possibility so that they know to plan any site-specific mitigating practices to ensure positive outcomes.” In our field work, we found indications that site-specific mitigating practices were not always implemented when practices with potentially negative effects on water quality were installed. In an effort to determine whether the instances we observed were anecdotal or more prevalent, we then examined EQIP data but found these data not sufficiently detailed to determine if appropriate site-specific mitigation practices were also implemented. 4. NRCS commented on our finding that USDA data showed that nutrient management plans had not been funded for use on properties in about two-thirds of the 800 watersheds, stating that “this information is misleading and inaccurate…Although nutrient management is a frequently used approach for enhancing water quality, it is not the only practice used for mitigating water quality issues.” We disagree that this information is misleading and inaccurate. In meetings in July and August 2011, NRCS officials stated that nutrient management planning was the agency’s primary method for ensuring that those practices intending to conserve one resource (e.g., soil) do not inadvertently harm another resource (e.g., water). As we stated in our draft report, nutrient management plans are the mechanism that field planners often use to ensure water quality is protected on a property in an impaired watershed, and such plans might direct field planners to propose one or more of the 80 conservation practices that can improve water quality, according to NRCS officials. Our draft report also acknowledged that where nutrient management plans are not in place, other conservation planning procedures exist. For example, field planners might propose alternative mitigation measures—which would likely include one or more of the 80 conservation practices that can improve water quality—without requiring a landowner to adhere to a formal nutrient management plan. As we state in the report, because of limitations in the precision of EQIP data, we could not ascertain the extent to which alternative mitigation measures were in place for the roughly two-thirds of watersheds where nutrient management plans were not funded and where the data also showed that practices had been funded that could potentially degrade water quality. 5. NRCS stated that because we did not “consider practices other than nutrient management,” we “understate the number of water quality improving practices that are in place on the landscape.” We are not questioning the number of water quality practices funded by NRCS; rather, it was the limitations of USDA’s data that prevented us, and NRCS, from identifying the full extent to which those water quality improving practices were implemented where they should be.
Several state environmental protection officials we spoke with echoed this concern, telling us that because they do not have information on where NRCS funds practices, they do not know whether, and where, they need to use section 319 funds to implement mitigating practices to protect water quality or other practices to complement NRCS efforts. The agency’s summary data showing that “80 percent [of watersheds] had water quality improving practices” does not mean that such practices were always properly implemented with other companion practices to mitigate the effects of those practices that have the potential to degrade water quality. 6. NRCS disagreed with our finding that soil conservation practices may sometimes adversely affect efforts to protect water quality. In addressing this issue above, we acknowledged that, in general, such practices have substantial environmental benefits, but that in certain instances, it is possible that certain conservation practices—including some designed to minimize soil erosion—can negatively affect water quality when the proper companion practices are not also implemented. 7. NRCS provided information showing that under EQIP, the agency rarely implements conservation practices in isolation and instead funds “an average of four practices per contract.” We acknowledge, as previously stated, that NRCS rarely funds practices in isolation and have revised the report to clarify this statement. We also acknowledge that NRCS generally funds multiple practices per contract—an average of four, according to NRCS’ letter—but this number alone does not shed light on the type of practices that are being installed and whether the additional practices mitigate the potentially negative effects on water quality in the watersheds we analyzed. 8. NRCS commented on how we used information in its conservation practice physical effects assessments, asserting that “The CPPE matrix is simply a very rough first approximation of effects that is most often used as a training tool … and is not the only effects assessment tool used during site-specific conservation planning.” We acknowledge, as we have above, that the CPPE matrix is not the only tool used during site-specific planning and that field planners generally propose “site- specific mitigating practices to ensure positive outcomes,” according to NRCS’ letter. Nevertheless, in our field work we found some indications that site-specific mitigating practices were not implemented when practices with potentially negative effects on water quality were installed. As noted above, it is the limitations of the EQIP data available to NRCS in illuminating whether such instances were isolated occurrences, or indicative of a broader issue, that led us to recommend that NRCS analyze available information, which may involve NRCS obtaining site-specific information from field offices, to determine the extent to which appropriate mitigation measures are implemented where nutrient management plans are not in use, particularly in watersheds where states are spending section 319 funds. 9. NRCS stated that we misinterpreted the agency’s CPPE matrix in describing underground outlet systems. This information, however, came directly from NRCS field office technical guides and other NRCS documents. We nonetheless added information, as NRCS suggested in its letter, to clarify that underground outlets “allow water to move more quickly and thus reduce the time that natural processes have to reduce nutrients or other agricultural chemicals.” 10. 
NRCS stated that our example of stream crossings in Washington state is an exception and that “miles of exclusion fencing are contracted every year to keep livestock out of streams, and stream crossings are appropriately utilized to help attain those water quality benefits.” We acknowledge that the example from Washington State may well be an exception. Nevertheless, because of limitations in the availability of data, neither we nor NRCS could determine whether examples such as these were exceptions or more prevalent. As we stated earlier, it was for this reason, and to ensure that complete data are available to allow NRCS and others to assess whether the program has unintended water quality impacts, that we recommended that NRCS analyze information on the extent to which mitigation measures are implemented in situations where NRCS-funded conservation practices may negatively affect water quality (and particularly in watersheds where states are spending section 319 funds). 11. NRCS stated that our statement that 17 percent of conservation practices were types of individual conservation practices that could degrade water quality was misleading and conveyed an inaccurate message. Specifically, NRCS stated that this finding is (1) based on misuse of the CPPE matrix, and (2) does not report the number of practices actually installed in isolation. To respond to NRCS’ first point, as we stated above, we acknowledge that the CPPE matrix is not the only tool used during site-specific planning, but in our field work we found some indications that site-specific mitigating practices were not implemented when practices with potentially negative effects on water quality were installed. To respond to NRCS’ second point, because of limitations in the precision of EQIP data, we could not ascertain the extent to which alternative mitigation measures were in place for the roughly two-thirds of watersheds where nutrient management plans were not funded and where the data also showed that practices had been funded that could potentially degrade water quality. Upon our request for information on alternative mitigation measures, NRCS officials told us that such information was documented and stored at the field level but that such information was not catalogued and available at the headquarters level. 12. NRCS stated that we misinterpreted and misused the CPPE matrix. As we stated above, NRCS’ letter states that the CPPE matrix provides an “approximation of potential effects” for each practice and, as such, can give an indication of the kind of effects that can occur if site-specific conservation plans are not implemented as intended. In our field work, we found indications that site-specific mitigating practices were not always implemented when practices with potentially negative effects on water quality were installed. 13. NRCS stated that we misinterpreted USDA statistics on the use of nutrient management plans and “violated a basic tenet of statistical analysis by inferring a conclusion that far exceeds the limitations of the data on which it is based.” We acknowledge, as we did in our draft report, that where nutrient management plans are not in place, other conservation planning procedures exist. 
Because of limitations in the precision of EQIP data, however, we could not ascertain the extent to which alternative mitigation measures were in place for the roughly two-thirds of watersheds where nutrient management plans were not funded and where the data also showed that practices had been funded that could potentially degrade water quality. Upon our request for information on alternative mitigation measures, NRCS officials told us that such information was documented and stored at the field level, but that such information was not catalogued and available at the headquarters level. Rather than drawing “inferences that exceed the limitations of the data on which they are based,” as NRCS stated, we concluded that neither we nor NRCS could draw such inferences without additional information on the extent to which alternative measures are employed—or if site-specific conditions make them unnecessary—on these EQIP contracts when nutrient management plans are not used. 14. NRCS stated, “In summary, it is clear that this assertion is based on misinterpretation and subsequent misuse of the CPPE matrix, a lack of knowledge of the NRCS conservation planning process, inferences that exceed the limitations of the data, and a disregard for university and Federal research findings.” We disagree. Our draft report acknowledged the noteworthy goals and accomplishments of EQIP in mitigating the impacts on water quality of certain agricultural practices. In response to the NRCS comments, we revised the language in the draft report to further discuss the program’s benefits in finalizing the report. However, we found instances where NRCS’ conservation planning process did not mitigate certain practices’ effects at the ground level, and where water quality may have been affected as a result. In an effort to ascertain whether the instances we observed were anecdotal or more prevalent, we examined USDA data and other information, including the CPPE matrix, and concluded that neither we nor NRCS could draw such inferences without additional information on the extent to which alternative measures are employed on Environmental Quality Incentives Program contracts when nutrient management plans are not used. It was for this reason—and to ensure that complete data are available to allow NRCS and others to assess whether the program has unintended water quality impacts—that we recommended that NRCS analyze available information and obtain site-specific information from field offices as necessary. NRCS would then be in a better position to determine the extent to which appropriate mitigation measures are implemented when nutrient management plans are not in use (and particularly in watersheds where states are spending section 319 funds). 15. NRCS provided additional information on EQIP’s water quality benefits, specifically requesting that we include information from published studies on the benefits of agricultural conservation practices. The draft report cited EQIP’s contributions toward improving water quality, but we nonetheless added language as suggested on the program’s pollutant reductions in key watersheds and on NRCS reports describing the agency’s water quality improvement efforts. In addition to the individual named above, Steve Elstein, Assistant Director; Nathan Anderson; Elizabeth Beardsley; Mark Braza; Ellen Chu; Emily Eischen; Mitch Karpman; Jill Lacey; Dae Park; Kiki Theodoropolous; Jason Trentacoste; and Josh Wiener made key contributions to this report.
Pollution from nonpoint sources—such as runoff from farms or construction sites—remains the leading cause of impairment to the nation’s waters. Under section 319 of the Clean Water Act, each year EPA provides grants to states to implement programs and fund projects that address nonpoint source pollution; the program received $165 million in fiscal year 2012. Section 319 includes minimum conditions that states must meet to receive grants. By regulation, EPA’s 10 regional offices oversee state programs and are to ensure that states’ projects can be feasibly implemented. USDA also has programs to protect water resources. GAO examined (1) states’ experiences in funding projects that address nonpoint source pollution, (2) the extent to which EPA oversees the section 319 program and measures its effectiveness, and (3) the extent to which key agricultural programs complement EPA efforts to control such pollution. GAO surveyed project managers, reviewed information from EPA’s 10 regional offices on oversight of state programs, and analyzed USDA data. Under section 319 of the Clean Water Act, state-selected projects to reduce nonpoint source pollution have helped restore more than 350 impaired water bodies since 2000, but other projects have encountered significant challenges. According to GAO survey results, 28 percent of projects did not achieve all objectives originally identified in the project proposal (e.g., implementing the desired number of pollution reduction practices), while many that did so still faced considerable challenges. About half of such challenges were beyond staff control (e.g., bad weather or staff turnover), but the other half were challenges that generally could have been identified and mitigated before projects were proposed and selected for funding, such as gaining access to desired properties. In one state, for example, $285,000 in section 319 funds was to subsidize the cost to homeowners of repairing damaged septic systems. Once the grant was awarded, however, only one homeowner signed up to participate. The Environmental Protection Agency’s (EPA) oversight and measures of effectiveness of states’ programs have not consistently ensured the selection of projects likely to yield measurable water quality outcomes. EPA’s 10 regional offices varied widely in their review of states’ work plans, which describe projects states plan to undertake in the upcoming year, and project selection criteria, which identify eligibility parameters for receiving section 319 funds. For example, three regional offices reported reviewing annual work plans in depth and actively influencing the types of projects selected, while three others reported limited to no involvement in such reviews, instead deferring to states’ judgment on project feasibility and selection. EPA, however, has not provided its 10 regions with guidance on how to oversee the state programs. Also, EPA’s primary measures of program effectiveness may not fully demonstrate program achievements. Section 319 requires states to report to EPA on two measures, including reductions in key pollutants. It does not limit EPA to these two measures, but the agency has chosen to use them as barometers of success for the section 319 program. States can demonstrate their achievements in additional ways—ways that may provide a more accurate picture of the overall health of targeted water bodies, such as the number and kind of living organisms in the water.
USDA’s Environmental Quality Incentives Program is the key agricultural conservation program that can complement EPA efforts to reduce nonpoint source pollution, and its conservation practices have significantly reduced pollutants coming from agricultural land across the country. Notwithstanding its achievements, certain conservation practices can adversely affect water quality if not properly implemented—for example, by transporting polluted runoff from nutrient-laden fields into nearby water bodies. The agency’s Natural Resources Conservation Service (NRCS) has procedures in place intended to ensure that its practices do not inadvertently harm water quality. During its field work, GAO identified a few instances where these procedures may not have been followed (including in watersheds where EPA’s section 319 funds had been used), and therefore sought NRCS data to determine if they were isolated instances or indicative of a more prevalent issue. NRCS’ national level data, however, are not sufficiently detailed to identify whether appropriate measures are always in place to mitigate potential water quality impacts. According to NRCS, such data are instead located in its field offices and are not analyzed by the agency. GAO recommends, among other things, that EPA provide section 319 oversight guidance to its regional offices and that USDA analyze data to determine if measures were taken to mitigate water quality impacts in section 319 project areas. EPA agreed with the recommendations, while USDA was silent on them. Both agencies commented on specific findings, which are addressed within the report.
IRS has 10 submission processing centers located throughout the country that are responsible for processing paper returns, 5 of which also process electronic returns. Electronic returns are relatively easy to process, while the processing of paper returns involves several additional steps, as shown in figure 1. The fewer steps involved in processing electronic returns, as compared with paper returns, translate into a cost avoidance for IRS. In response to a question raised by the House Appropriations Committee in 2001, IRS estimated that 50 million individual income tax returns would be filed electronically in fiscal year 2002. IRS estimated that it would need 3,150 more full-time equivalent staff years if none of those returns were filed electronically. At IRS’ estimate of $36,300 per staff year, that would be a cost avoidance of $114.3 million. Therefore, using IRS’ estimates, which we did not verify, if no returns were filed electronically, IRS’ fiscal year 2002 budget request of $615 million for submission processing would have increased to about $729 million. The major focus of our review was on factors that worked against an even greater reduction in submission processing costs. To address our objectives, we interviewed IRS National Office officials and officials at 2 of the 10 submission processing centers—Atlanta and Cincinnati—to obtain their opinions about any factors that limited the impact of electronic filing on the amount of resources devoted to processing returns from fiscal years 1997 through 2000. We reviewed IRS documents and GAO reports that contained information related to these factors. We analyzed a report prepared for IRS by the consulting firm of Booz-Allen & Hamilton about future prospects for cost reductions in submission processing and obtained IRS officials’ opinions on that subject. We performed our work between January and October 2001 in accordance with generally accepted government auditing standards. We discuss our scope and methodology in greater detail in appendix I. Several factors limited the impact of electronic filing on the resources devoted to processing returns from fiscal years 1997 through 2000. These factors fell into two broad categories—filing trends that partially offset the potential savings from increases in electronic filing and expanded demands on paper processing staff. Even though the number of electronic returns filed from 1997 through 2000 increased, the potential savings from that increase were partially offset by the following filing trends: The increase in electronically filed returns was partially offset by an increase in total returns filed. The number of the most complex individual income tax returns filed on paper—standard Form 1040s—essentially stayed the same. The number of paper individual income tax returns received by IRS during the peak filing period stayed relatively the same from 1997 through 2000, and peak processing needs drive the resources needed to process individual paper returns. About 17.9 million more individual and business tax returns were filed electronically in 2000 than in 1997. However, as shown in table 1, because of an overall increase in the total number of returns filed from 1997 through 2000, the net decline in paper returns over that period was much less than the 17.9 million net increase in electronic filings.
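The cost-avoidance figures cited at the beginning of this section follow from simple arithmetic on IRS' estimates (which, as noted, we did not verify); the short calculation below reproduces them.

```python
# Reproducing the cost-avoidance arithmetic from IRS' estimates (unverified by GAO).
staff_years_avoided = 3_150        # FTE staff years IRS said it would otherwise need
cost_per_staff_year = 36_300       # IRS' estimated cost per staff year, in dollars
fy2002_request = 615_000_000       # submission processing budget request, in dollars

cost_avoidance = staff_years_avoided * cost_per_staff_year
print(f"Cost avoidance: about ${cost_avoidance / 1e6:.1f} million")            # ~$114.3 million
print(f"Request without e-filing: about ${(fy2002_request + cost_avoidance) / 1e6:.0f} million")  # ~$729 million
```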
Thus, the increase in electronic returns had less of an impact on processing costs than might have been expected because any savings from that increase would be partially offset by the costs to process the overall increase in returns filed. From 1997 through 2000, the number of complex individual returns filed on paper essentially remained the same. The complexity of a return varies according to the form on which it is filed. Complexity is determined by the number of lines of data that need to be entered on a form. According to IRS submission processing officials, the standard Form 1040 is the most complex. Complexity then decreases from the Form 1040 to the Form 1040A, and finally to the Form 1040EZ. The more complex a return, the longer it takes to process and the greater the processing costs. For example, according to data developed for IRS by a consulting firm for fiscal year 1999, the average direct labor cost to process a Form 1040 filed on paper was $1.93 compared to $1.50 to process a paper 1040A and $1.01 to process a paper 1040EZ. As shown in table 2, the number of Form 1040s filed on paper decreased by only about 1 percent from 1997 through 2000, with the only decrease occurring between 1999 and 2000. The reductions in paper 1040As and 1040EZs were much larger during the years covered by our study—15 and 23 percent, respectively. Any reductions in processing costs that IRS may have been able to realize as more taxpayers filed electronically depended, in great part, on the cost of processing those same returns filed on paper. IRS would have been able to reduce costs more if a greater number of taxpayers who were filing the more complex (and thus more costly to process) returns on paper had started filing electronically. Another filing trend that limited the impact of electronic filing on processing costs involved the number of paper returns filed by individuals during the peak filing period—the 2 weeks of the year when the most individual income tax returns are filed. The peak filing period for paper returns filed by individuals is mid-April. Business returns do not experience the same peak phenomenon. Businesses have various fiscal years, which affect their filing period. In addition, many business returns must be filed quarterly. As shown in table 3, while the overall number of individual returns filed on paper decreased from 1997 through 2000, the number of paper returns filed during the peak period stayed relatively the same. The number of paper individual returns received during the peak filing period drives the amount of resources needed to process individual paper returns. According to the Director of Submission Processing, when Submission Processing determines its resource needs, the first priority is the resources (including staff, equipment, and space) needed during the peak period. The Director added that, all things considered, if the number of individual paper returns received during the peak period increases while the total number of paper returns received during the entire year decreases, the increase during the peak period would have more of an impact on submission processing resources than would the overall decrease in paper receipts. IRS’ goal of improving business results also directly affects the resources needed during the peak filing period. To help achieve this goal, in 2000, 85 percent of refund checks for paper returns were to be processed within 40 days. Doing so also contributes to IRS’ goal of improving taxpayer satisfaction.
Thus, to meet these goals, IRS has to ensure that there are enough resources to process the increased number of peak period returns within this time frame. Another factor that limited the impact of electronic filing on the resources devoted to paper processing was the increase in demands placed on paper processing staff from 1997 through 2000. These increased demands included the following: Numerous processing changes increased the workload for units responsible for (a) reviewing returns for completeness and coding them for data entry, (b) transcribing data, and (c) correcting errors. Because most electronic filers submitted a paper signature document, the work done by paper processing staff was not totally eliminated when taxpayers filed electronically and the volume of that work increased as electronic filing increased. Front-line employees spent increasing amounts of time on activities, including training, not specifically related to processing returns. Numerous changes were made in the processing of returns from 1997 through 2000, which, according to IRS officials, resulted in an increased workload. For paper processing staff, these changes generally increased the amount of time spent reviewing returns and coding them for data entry, the number of keystrokes entered, and the number of IRS and taxpayer errors to be corrected. Table 4 illustrates the estimated effects of some of these changes according to IRS’ data. Some processing changes, such as the validation of secondary Social Security numbers, were made to help ensure compliance with the tax law. Other changes stemmed from changes in the tax law that established new credits and deductions for which IRS had to enter data into its computer system. Although the numbers of additional seconds and keystrokes cited in table 4 for any one change are small, the overall effect of these processing changes, considering the number of returns involved, is to increase the number of staff years needed to process returns. For example, the additional second needed to review and code 41.6 million returns for secondary Social Security number validation equates to about 11,556 hours or (on the basis of 2,088 hours per staff year) 5.5 staff years. Similarly, a total of about 584.5 million additional keystrokes would have had to be made to process the four changes in table 4, which, we roughly estimated using IRS data on average keystrokes per hour, would consume at least 78,000 additional hours or 37.4 staff years at a cost of almost $1.4 million. Although these changes in workload may not be of great magnitude, they required additional resources that offset some of the potential savings from electronic filing. The workload of error correction staff can also be affected by changes in the accuracy of work done by other processing staff. In that regard, the accuracy of staff who reviewed and coded tax returns for transcription increased from 95 percent in 1997 to 96.6 percent in 2000, while the accuracy of data transcribers decreased from 94.7 percent to 93.9 percent. We do not know how much, if at all, the volume of error correction work actually changed as a net result of these increases and decreases in accuracy. The increase in responsibilities can also affect the staff’s productivity.
Using IRS information on average keystrokes per hour for various tax forms as a measure of data transcribers’ productivity, table 5 shows that there was a general decline in productivity in 1999, which is when transcribers began using a new computer system, and a general improvement in productivity in 2000. For example, IRS data for Other-Than-Full-Paid Form 1040s showed that the average keystrokes per hour went from 7,503 in 1997 to 7,250 in 1998 and 6,802 in 1999 before rising to 7,108 in 2000. Submission Processing officials said that many factors affected accuracy and productivity and that it would be difficult to determine specifically what caused them to decrease. However, they believed that the learning curve associated with using a new computer system in 1999 was probably the major contributing factor to the decrease in data transcribers’ productivity. They added that there has been high turnover in the Submission Processing Centers for the past few years due to the availability of higher-paying jobs elsewhere within IRS or in the private sector. As a result, they have less experienced staff, which may have contributed to lower accuracy and productivity rates. During the years covered by our study, electronic filing was not entirely paperless. Most electronic filers continued to submit a paper signature document, even though in 1999, IRS began testing electronic options to replace the document. Thus, any savings IRS realized when taxpayers switched to electronic filing were partially offset by the costs incurred in processing the increase in the volume of paper signature documents that resulted from the increase in electronic filing. Before 1999, individual taxpayers who filed electronically had to submit a paper signature document that was processed by the staff who processed paper returns. Beginning in 1999, IRS provided two options that could be used in place of submitting a paper signature document. In 2000, about 6.8 million (or about 19 percent) of the 35.4 million taxpayers who filed their individual income tax returns electronically used one of those options. However, that meant that IRS still had to process about 28.6 million paper signature documents. According to a March 2000 study prepared for IRS by a consulting firm, it cost IRS $0.26 in direct labor costs to process each paper signature document in 1999. Assuming that same rate in 2000, it would have cost IRS about $7.4 million in labor costs to process the 28.6 million signature documents. Front-line paper processing employees spent greater amounts of their time on activities not specifically related to processing returns in fiscal year 2000 than they did in 1997. The Submission Processing Director and the Processing Division Branch Chiefs at the Atlanta and Cincinnati Submission Processing Centers said that personnel were spending more time (1) in required training not related to processing returns and (2) on required activities related to the Employee Satisfaction Survey. Some of the required training, such as training about the circumstances under which IRS employees can be charged with misconduct and terminated, was provided in order to apprise staff of new statutory requirements. IRS plans to use results from the Employee Satisfaction Survey to improve operations. According to the Branch Chiefs, these activities, while important, reduced the amount of time that employees were able to devote to processing returns.
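The paper signature document estimate discussed earlier in this section reduces to a single multiplication. The minimal sketch below uses the report's 2000 filing volumes and applies the consultant's 1999 direct labor rate of $0.26 per document to 2000, the same assumption the report makes.

```python
# Paper signature document cost for 2000, using figures cited above. The
# $0.26-per-document rate is the consultant's 1999 direct labor cost; applying
# it to 2000 volumes mirrors the report's own assumption.
COST_PER_SIGNATURE_DOC = 0.26            # dollars per paper signature document

e_filed_returns = 35_400_000             # individual returns filed electronically in 2000
electronic_signatures = 6_800_000        # filers who used one of the paperless options

paper_signature_docs = e_filed_returns - electronic_signatures
print(f"Paper signature documents: {paper_signature_docs:,}")                                  # 28,600,000
print(f"Estimated direct labor cost: ${paper_signature_docs * COST_PER_SIGNATURE_DOC:,.0f}")   # about $7.4 million
```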
According to data in IRS’ Work, Planning, and Control System, the amount of time paper processing staff spent on all training, including training related to processing returns, and on actions related to the Employee Satisfaction Survey increased from fiscal years 1997 through 2000. The percentage of time spent on these activities grew from 7.8 percent to 9.9 percent. Because IRS records did not separately identify all training related to processing and nonprocessing activities, it was not possible to determine the change in the amount of time spent in nonprocessing-related training. Data in the Work, Planning, and Control System also showed that the number of hours submission processing staff spent on activities related to the Employee Satisfaction Survey increased from about 12,000 in fiscal year 1997 to almost 96,000 in fiscal year 2000. According to the Submission Processing Director, finding the time to spend on nonprocessing-related training and the Employee Satisfaction Survey, both of which were required, was more difficult for Submission Processing than for other units in IRS. This was because some units could absorb these activities by doing less direct work, such as opening fewer collection cases. Submission Processing, on the other hand, could not process fewer returns, so any additional required activities meant working more overtime, keeping seasonal employees longer, or hiring more employees than originally planned, resulting in an increase in costs. The Director added that the Employee Satisfaction Survey was completed during the peak filing period to help ensure that IRS obtained the views of seasonal staff. According to a report prepared for IRS by a national consulting firm, future reductions in processing costs are possible, with the amount of any reduction dependent on the nature and extent of future increases in the number of returns filed electronically and changes in submission processing’s operations. Whether these reductions are realized will depend not only on the actual number of returns filed electronically and the extent to which different operational changes are implemented, but also on the extent of any changes in the workload of paper processing staff due to tax law changes or increased IRS compliance efforts. In a March 2000 report prepared for IRS, the national consulting firm of Booz-Allen & Hamilton analyzed how various scenarios might affect IRS’ processing costs starting in 2007. The firm developed eight scenarios that involved a growth in volume of returns and a growth in electronic filing of individual, business, and other types of forms, as well as several operational changes, such as making additional business forms available for electronic filing and consolidating submission processing centers. The firm also developed four cost-reduction estimates for each scenario based on differing percentages of electronic filing for individual, business, and other returns. Those cost-reduction estimates ranged from $27 million to $243 million. Figure 2 shows that the firm’s estimates when using the highest electronic filing projections—80 percent for individual returns, 45 percent for business returns, and 30 percent for other returns—ranged from $104 million to $243 million. (Figure 2 scenario assumptions: 80 percent of individual returns, 45 percent of business returns, and 30 percent of supplemental and information returns filed electronically; scanning and filing by telephone eliminated.)
The estimates in figure 2 assume that IRS will meet its goal of having 80 percent of all individual income tax returns filed electronically by 2007. However, our assessment of IRS’ 2001 tax filing season in response to another request from this Subcommittee showed that (1) about 31 percent of individual income tax returns were filed electronically in 2001 (through October 26, 2001) and (2) fewer individuals filed electronically in 2001 than IRS had projected (40 million filed vs. 42 million projected). With 2001 as a starting point and assuming that the total number of individual income tax returns filed and the number of such returns filed electronically each continue to grow at the same annual rate as achieved between 2000 and 2001 (1.85 percent and 13.7 percent, respectively), we projected that only about 60 percent of individual income tax returns would be filed electronically in 2007. Using the estimates in the consultant’s March 2000 report for a 65-percent level of individual electronic filing, the cost reductions would range from $74 million to $170 million annually in 2007, or about 30 percent less than at the 80-percent electronic filing level for individuals. The consultant’s report focused on reductions that could be realized by making specific changes related to processing returns and not on potential increases in the type and amount of data on paper returns that IRS would need to process due to tax law changes or enhanced compliance efforts. Consequently, any reductions in overall processing costs would depend on the level of any such increases. In that regard, IRS made at least one significant change in submission processing’s workload in 2001 that increased costs. IRS’ 2001 budget included 378 additional full-time equivalent staff years in submission processing for transcribing Schedule K-1s (Beneficiary’s, Partner’s, or Shareholder’s Share of Income, Deductions, Credits, etc.). IRS plans to compare the transcribed K-1 information to that reported on the tax returns filed by beneficiaries, partners, and shareholders to determine if income was accurately reported. The Director of Submission Processing told us that the cost reductions in the consultant’s study may also be overstated because the study did not consider the resources needed to process returns during the peak filing period. The consulting firm official responsible for developing the data in the study said that the maximum cost reductions included in the study would not be affected by peak filing period resource needs, because the reductions were based on the assumption that 80 percent of individual taxpayers would file electronically. To achieve that level of electronic filing, the number of returns filed on paper would have to decrease significantly from the fiscal year 2000 levels previously described in this report. Once this happens, fewer resources would be needed to process paper returns during the peak filing period. The official added that at some lower percentage of electronic filing, peak period filing needs would affect possible cost reductions, but he did not know what that level would be. The Commissioner of Internal Revenue provided written comments on a draft of this report in a December 17, 2001, letter, which is reprinted in appendix II. The Commissioner said that our report provided useful explanations for the continued increase in submission processing costs, despite the increase in the number of electronically filed returns.
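The 60-percent projection described earlier in this section can be reproduced with a short compound-growth calculation. The sketch below uses the report's 2001 starting point (about 40 million individual returns filed electronically, roughly 31 percent of all individual returns) and the 2000 to 2001 growth rates it cites; deriving the 2001 total from those two figures is our own step, not something the report spells out.

```python
# Sketch of the 2007 electronic filing projection described above. Inputs are the
# figures cited in this report; deriving the 2001 total from the 31 percent share
# is our assumption about how the projection was constructed.
E_FILED_2001 = 40_000_000        # individual returns filed electronically in 2001
E_FILE_SHARE_2001 = 0.31         # share of all individual returns filed electronically
TOTAL_GROWTH_RATE = 0.0185       # annual growth in total individual returns, 2000 to 2001
E_FILE_GROWTH_RATE = 0.137       # annual growth in electronically filed returns, 2000 to 2001

total_2001 = E_FILED_2001 / E_FILE_SHARE_2001
years = 2007 - 2001

e_filed_2007 = E_FILED_2001 * (1 + E_FILE_GROWTH_RATE) ** years
total_2007 = total_2001 * (1 + TOTAL_GROWTH_RATE) ** years
print(f"Projected 2007 electronic filing share: {e_filed_2007 / total_2007:.0%}")  # about 60%
```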
At his suggestion, we revised the report to clarify the objective of the March 2000 consultant’s study. The Commissioner also suggested that we revise the report to acknowledge the steps IRS has taken to reduce the processing costs associated with electronic filing, specifically with respect to the paper signature document. Our report recognizes the steps IRS has taken to enable electronic filers to sign their returns electronically. However, most electronic returns were still filed with paper signature documents. Of the approximately 40 million returns filed electronically in 2001, about 9 million were filed using an electronic signature. The remaining returns, about 31 million, were filed using a paper signature document—an increase of about 2.4 million returns compared to 2000. Using the direct labor cost included in the March 2000 consultant’s study for processing paper signature documents ($0.26 per document), it cost IRS about $624,000 more in 2001 than in 2000 to process these documents. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means and to the Ranking Minority Member of this Subcommittee. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. This report was prepared under the direction of David J. Attianese, Assistant Director. If you have any questions about this report, please contact me or Mr. Attianese on (202) 512-9110. Key contributors to this report were Julie Schneiberg, Margaret Skiba, and Shellee Soliday. Our first objective was to determine what factors, if any, limited the impact of electronic filing on the resources devoted to processing paper returns. To address this objective, we interviewed several Internal Revenue Service (IRS) officials responsible for submission processing and electronic tax administration. We also visited 2 of IRS’ 10 submission processing centers, including 1 that had an electronic filing unit—Cincinnati—and 1 that did not have an electronic filing unit—Atlanta. At both centers, we interviewed the Center Directors and several Processing and Post Processing Division officials. These divisions have primary responsibility for processing returns. At Cincinnati, we also interviewed the Lead Tax Examiner in the Electronic Filing Unit to obtain details about the Unit’s role in electronic return processing. We selected Cincinnati from among the five centers that had an electronic filing unit because Submission Processing’s Monitoring Section was located there and officials would be able to provide information related to processing both paper and electronic returns. We selected Atlanta from among the five centers that did not have an electronic filing unit because it was convenient to our audit staff. Because these two centers were judgmentally selected, our results cannot be projected to all 10 centers. However, the Director of Submission Processing said that the opinions provided by officials at these two centers would be representative of the opinions that would be provided by officials at the other eight centers.
To further address the first objective, we analyzed several studies prepared either by or for IRS, including a consulting firm’s study of the costs to process electronic returns. We analyzed available IRS statistics related to several topics, including training, filings by type of return, the number of keystrokes associated with new data to be entered into the computer by data transcribers, and average keystrokes per hour. We also reviewed our past reports to obtain information about the accuracy of work done by paper processing staff. Our work on the first objective focused on fiscal years 1997 through 2000. We selected this 4-year period because (1) at the time we began our review, fiscal year 2000 was the last complete year for which data were available and (2) we wanted data for enough years before 2000 to be able to analyze trends. We decided that a total of 4 years would provide sufficient trend data. We included fiscal year 2001 data about the peak filing period and the number of individual returns filed electronically because it was readily available. Our second objective was to determine the prospects for future reductions in submission processing costs. We interviewed the Director of Submission Processing, reviewed the previously referred to report on costs to process electronic returns, and interviewed the consulting firm official who had responsibility for developing the data in the report. This report presented eight scenarios involving a growth in the volume of returns filed and a growth in electronic filing and included estimates of the cost reductions that IRS would realize under each scenario. The scenarios included different combinations of several variables, including increases in electronic filing by individual or business taxpayers, elimination of the paper signature document, and increases in the number of business forms that can be filed electronically. We also reviewed information that IRS provided to the House Appropriations Committee in June 2001 on the number of additional full-time equivalent staff years IRS would need to process returns if all returns were filed on paper. We performed our work between January and October 2001 in accordance with generally accepted government auditing standards. We obtained written comments from the Commissioner of Internal Revenue on a draft of this report. The comments are discussed near the end of this report and are reprinted in appendix II.
From fiscal years 1997 through 2000, the number of individual and business tax returns filed electronically increased from 23 million to 41 million. During the same period, the Internal Revenue Service's (IRS) expenditures for submission processing grew from $795 million to $924 million, an increase of 16 percent. Because it costs less to process an electronic return than a paper return, a growth in processing costs seemed improbable. Interviews with IRS officials and an analysis of relevant documentation identified several factors that limited the impact of electronic filing. Specifically, (1) the overall number of individual and business tax returns filed increased, and the resources needed to process that increase partially offset the resources saved by processing more electronic returns; (2) the number of the most costly-to-process individual income tax returns filed on paper essentially stayed the same; and (3) the number of individual income tax returns filed on paper and received during the peak filing period stayed relatively the same, and peak processing needs drive the resources needed to process individual paper returns. Although electronic filing increased, so did the demands placed on paper processing staff. In particular, (1) processing changes increased the workload for units responsible for reviewing returns for completeness, coding them for data entry, and correcting errors; (2) because most electronic filers still sent a paper signature document to IRS, the work done by paper processing staff was not entirely eliminated when taxpayers filed electronically; and (3) front-line paper processing staff spent increasing amounts of time on activities, including training, not specifically related to processing returns. Future reductions in processing costs as a result of electronic filing are possible.
In conducting our review, we reviewed and analyzed various DOD computer center consolidation plans and reported costs and assessed how these plans met OMB’s Bulletin 96-02 requirements. We also compared DOD plans and practices to the practices and strategies employed by private-sector companies we visited during our review that have successfully consolidated computer centers. In addition, we met with consultants who advise computer center managers on improving services and with the General Services Administration’s Office of Governmentwide Policy and Federal Systems Management Center. We conducted numerous interviews with DOD officials to discuss their approach to consolidating and modernizing computer centers. We also discussed with OMB officials the OMB Bulletin governing computer centers and their views on DOD responses. Details of our scope and methodology are included in appendix I. We did not validate the accuracy of the information provided by DOD on the numbers and costs of computer centers, the alternatives analyses, funding plans, and processing capacities. Our work was performed from March 1996 through January 1997 in accordance with generally accepted government auditing standards. The Department of Defense provided written comments on a draft of this report. These comments are presented and evaluated at the end of this letter and are reprinted along with our more detailed evaluation in appendix II. The Office of Management and Budget provided oral comments on a draft of this report which are incorporated in the report as appropriate and discussed at the end of this letter. The federal government owns hundreds of computer centers that perform such services as processing agency software programs, providing office automation and records management, and assisting in the management of wide area computer networks. In recent years, the federal government has recognized that most of these centers operate below optimum capacity, use outdated technology, and perform redundant services. It has concluded that it can achieve significant dollar savings and operational efficiencies by consolidating computer centers or by acquiring its information processing services from the private sector. In 1993, the Vice President’s National Performance Review recommended that the federal government take advantage of evolving technology and begin consolidating and modernizing its computer centers to reduce the duplication in information processing services and decrease information processing costs. To help implement this recommendation, a committee formed by the Council of Federal Data Center Directors recommended that the Office of Management and Budget establish operating capacity targets for the consolidated centers and that federal agencies follow an approach successfully used by private-sector companies and other government agencies to plan, implement, and optimize their own computer centers. The Committee’s recommendations formed the basis of OMB guidance to promote computer center improvements and consolidations, which was issued in October 1995. 
This guidance, OMB Bulletin 96-02, Consolidation of Agency Data Centers, called on agencies to (1) reduce the number of their computer centers, (2) collocate small and mid-tier computer platforms in larger computer centers, (3) modernize their remaining centers in order to improve the delivery of services, and (4) outsource information processing services to other federal or commercial computer centers when aggregate computer center capacities were below minimum target sizes. Table 1 lists OMB’s specific requirements. OMB allowed agencies considerable discretion as to which data centers they chose to retain and close so long as their consolidation scenario was cost-effective and minimal data center target sizes were met. The target sizes OMB set were based on a standard industry measure for information processing: millions of instructions per second, or MIPS. OMB asked that centers using IBM mainframe computers operate at 325 MIPS and centers using UNISYS operating systems operate at 225 MIPS. Further, OMB permitted agencies to justify not consolidating centers that fell below the target size if a particular center had a staff of less than five full-time employees, housed scientific processors and would otherwise be at least 90 percent of the minimal target size, or housed a large number of small and mid-tier processors and would otherwise be at least 90 percent of the minimal target size. OMB’s guidance is in keeping with recent congressional initiatives that focus on strengthening the planning and management of information technology efforts. In implementing the Paperwork Reduction Act and the Clinger-Cohen Act, OMB requires that information technology investments support core/priority mission functions and that they be undertaken by the requesting agency because no alternative private-sector or governmental source can efficiently support the function. These laws and OMB guidance also require agencies to establish an enterprisewide investment approach to information technology that includes selecting, controlling, and evaluating investments as part of an integrated set of management practices designed to link investments to organizational goals and objectives. Further, the National Defense Authorization Act for Fiscal Year 1997 requires the Secretary of Defense to report the Department’s plan for establishing an integrated framework for management of information resources within the Department by March 1, 1997. OMB’s guidance is also in keeping with the approach private-sector companies have taken in successfully consolidating and modernizing their own computer processing centers. We analyzed successful consolidation and modernization efforts carried out by three corporations and learned that they believed it was necessary to implement their strategies from a corporatewide perspective, rather than have separate components of their companies consolidate and modernize their own centers. These companies also ensured that from the outset of their consolidation efforts, they had clear and consistent policies and procedures governing how computer center services would be improved. This guidance spelled out such things as what constitutes an optimum computing center in terms of capacity and staff, what skills were needed to operate the centers, what cost and performance goals were relevant for the centers, and which services could be outsourced. 
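The Bulletin's size thresholds and exceptions, as summarized above, amount to a simple screening rule. The sketch below is our paraphrase of that rule, not text from the Bulletin itself; the MIPS targets are the ones OMB set, and the exemption logic follows the three exceptions described above.

```python
# A minimal sketch of the OMB Bulletin 96-02 screening criteria as summarized in
# this report; this is our paraphrase, not language from the Bulletin. The MIPS
# targets are those OMB set, and the exemption test reflects the three exceptions
# the report lists.
MIN_TARGET_MIPS = {"IBM": 325, "UNISYS": 225}

def consolidation_candidate(platform, mips, full_time_staff,
                            scientific_processors=False, many_small_midtier=False):
    """Return True if a center falls below OMB's minimum size and no exception applies."""
    target = MIN_TARGET_MIPS[platform]
    if mips >= target:
        return False            # already at or above the minimum target size
    if full_time_staff < 5:
        return False            # exempt: fewer than five full-time employees
    if (scientific_processors or many_small_midtier) and mips >= 0.9 * target:
        return False            # exempt: within 90 percent of the minimum target size
    return True                 # below target with no exception: consolidation candidate

# Example: a 200-MIPS IBM mainframe center with 30 full-time staff would be a candidate.
print(consolidation_candidate("IBM", 200, 30))   # True
```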
In setting capacity goals, the private-sector companies we visited also generally attempt to reach targets that are substantially higher than the ones set by OMB—from 1,000 to 3,500 MIPS. In addition, we learned that private-sector companies we visited during this review established strong oversight processes for ensuring that their computer center decisions were based on accurate, complete, and current information on cost, schedules, benefits, and risks; that all valid options for their computer center services were fully addressed; and that their current services were correctly benchmarked against comparable services. In 1996, Defense reported to OMB that it owned 155 computer centers that perform a variety of information processing-related services for the services and components. Among other things, the centers run software programs developed by the military services and various Defense components and provide information security services, customer help desk services, and records management services. Sixteen of these centers are central processing facilities known as megacenters and are owned by DISA. The remaining 139 centers are service- or component-unique centers. DOD also reported that it was continuing to further optimize and standardize its computer center operations as part of departmentwide and intra-agency consolidations that had started in 1990 and continue today. Defense has recognized that these computer centers have been operating inefficiently and that they need to adopt new technologies and address the increasing loss of in-house technical expertise in order to continue supporting the Department's large and complex information infrastructure. We agree with DOD that there are still many opportunities for savings. In fact, table 2 shows that many of these reported centers operate below the minimum processing capacity targets established by OMB for government-owned computer centers and thus are good candidates for consolidation or outsourcing. As table 2 indicates, 62 of DOD's 155 computer centers—about 40 percent—met OMB criteria for possible consolidation based on processing capacity targets. However, we believe that these numbers could be higher. As indicated in the table notes, many small or mid-tier centers were not considered as candidates for consolidation. According to OMB officials responsible for implementing the Bulletin, these centers should have been included unless otherwise exempted. Further, private-sector and government-sector studies have found that larger facilities allow organizations to economize on floor space, staff, and operating expenditures, and smaller centers tend to be cost-inefficient. Before OMB issued its computer center Bulletin 96-02, DOD had determined, based on industry practices, that consolidation would "position DOD to more effectively support common data processing requirements across services by leveraging information technology and resource investments to meet multiple needs." Since 1990, the Department has initiated and completed multiple intra-agency consolidations. In 1993, the megacenters were established as a result of (1) DOD's base closures and (2) other consolidation and cost reduction efforts. In establishing these centers, DOD expected to change its information processing environment from one that was stovepiped, or confined to individual military services and components, to one that supported information sharing DOD-wide.
Accordingly, since 1990, DOD consolidated its computer center operations by moving the workload and equipment from 194 DOD computer centers into 16 DISA megacenters by fiscal year 1996, reporting a reduction in processing costs of over $500 million. After these consolidations, DOD initiated several studies that looked into the question of whether the remaining megacenters should be further consolidated, modernized, or outsourced. One study—done by the Defense Science Board on the question of outsourcing DOD functions in general—reported in August 1996 that processing at DOD computer centers was more expensive than at private-sector computer centers, and it recommended that DOD computer center services be outsourced. A second study—done by a private contractor on the question of outsourcing, modernizing, and consolidating DISA's megacenters for the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I) in 1996—concluded that further consolidation and outsourcing of megacenter operations was feasible. The contractor reported that the megacenters' life-cycle (10-year) cost could be cut by more than a billion dollars if the megacenters were consolidated from 16 to 6 and if certain computer center services—such as the customer help desk and those services associated with day-to-day operation of the centers—were fully outsourced. The Undersecretary of Defense, Comptroller, was also directed to submit a report on the feasibility of outsourcing DOD's megacenters to the House Appropriations Committee by January 1, 1996. In this report, which was submitted to the Congress on December 26, 1996, the Comptroller largely agreed with the recommendations made by the contractor study described above and supported DISA's proposed management plan to implement those recommendations. Some of our concerns with this plan, which is DISA's consolidation strategy, are discussed in more detail in the next section of this report. In addition, as noted in the beginning of this report, we will be reporting separately on our detailed review of DISA's plans. While DOD and its components have made progress in consolidating and finding opportunities to optimize and outsource many of the functions of its computer center operations, DOD is still missing opportunities to achieve even greater savings under its current approach. The Defense leadership has chosen to allow the individual military services and components to carry out their computer center consolidation and modernization efforts independent of any departmentwide framework. In fact, the Assistant Secretary of Defense for C3I, as part of his guidance when forwarding the OMB Bulletin, stated "that each Service and Defense Agency has the flexibility to reduce its data centers in a manner that is consistent with the DOD Component's goals, management philosophy, and environment as long as such reductions occur within the framework of the OMB guidelines." This decision has resulted in inconsistent and contradictory strategies that fall short of meeting OMB's requirements and what we believe to be the intent of OMB's Bulletin. We also learned that some of the inconsistency and incompleteness of reported plans and strategies was caused, in part, by DOD's broad and inconsistent interpretations of the OMB Bulletin. Appendix IV provides a detailed analysis of how the military services and components responded to OMB's Bulletin. The following discussion highlights our findings.
As shown in the two tables that follow, the computer center consolidation plans of the individual military services and components submitted to OMB to date vary widely. For example, the Air Force, the Army, three Navy commands, and three Defense components plan to further consolidate in-house, while other parts of the Army and Navy, as well as the Defense Investigative Service, are choosing to keep their computer center operations in-house without further consolidation. Many of the strategies reflect a move toward mid-tier solutions without considering the potential for consolidation. Only the National Imagery and Mapping Agency and parts of the Navy chose to outsource (to DISA) their center operations. Further, just two services—the Air Force and the Army—considered inter-service consolidation of their respective computer centers within the Pentagon, and this action was already underway prior to the OMB Bulletin. Table 3 describes the approaches the services and components have decided on. Table 4 compares the strategies. We found that some of these strategies had contradictions that might well have been prevented had Defense better coordinated its computer center efforts. For example, as table 3 notes, the Department's primary information processing service provider, DISA, intends to modernize and consolidate its megacenters and begin to offer mid-tier processing services to attract additional business from the services and components. DISA believes that significant reductions in the cost of operations could be achieved and that much of DOD's computer processing is well suited for consolidation to DISA's computer center operations. However, it is clear from the strategies described above that most of the services and agencies are not considering sending additional business to DISA, and DISA has no authority to require the services and components to make such transfers. The Army, the Air Force, and the Defense Logistics Agency, for example, do not plan to increase their use of DISA services. Together, about $385,000 of the reported $915,500 spent on computer center operations is outside of DISA. Most of the consolidation strategies submitted by the military services and components to OMB failed to fully address all of the planning elements called for in the Bulletin. For example, some services and components did not provide sufficient information to show that they had (1) performed thorough analyses of their planned options, (2) demonstrated that they had the correct technical solutions for their computer center operations, (3) prepared even a high-level implementation approach to the major tasks associated with consolidation, or (4) provided estimates of how much it would cost to consolidate and modernize. Therefore, DOD and OMB do not have assurance that the services and components are addressing these critical planning elements in carrying out their strategies or that the approaches they have chosen are sound. Table 5 summarizes how the individual services and components responded to the OMB Bulletin ("N" meaning they did not respond, "Y" meaning they did respond, "P" meaning they partially responded, and "N/A" meaning not applicable). The table illustrates that the Defense Intelligence Agency, Defense Investigative Service, and Defense Special Weapons Agency were the only DOD components in compliance with all of the OMB requirements. The Army was the only other component to have submitted a complete alternatives analysis for its computer centers.
The table also shows that the Air Force, DISA, the Defense Commissary Agency, and the Defense Logistics Agency either partially addressed or did not address the requirements. As noted earlier, a more detailed analysis of the responses is provided in appendix IV. As reflected in table 5, we determined that the DOD submissions did not always comply with the OMB requirements. When we discussed this with DOD officials, they said that their submissions did not always describe their consolidation plans for their non-mainframe computer centers because some of the components believed that the guidance only applied to mainframes and others believed that the guidance did not apply to actions already underway and approved through DOD's life cycle management process. These officials had interpreted the Bulletin as requiring that non-mainframe computer centers be included in the inventory but not in the consolidation strategies, unless they affected the center's meeting the minimum target size. When we discussed this with OMB officials, they disagreed with DOD's interpretation. They stated that the Department's non-mainframe centers also should have been addressed in both the inventories and the consolidation strategies. We also asked OMB officials why they required submissions from each of the military departments and one from DOD. OMB officials told us that they required four separate submissions based on their interpretation of the Paperwork Reduction Act of 1995, which defined DOD as the Department of Defense and the three services. However, they further stated they would have preferred to receive from DOD a departmentwide inventory, consolidation strategy, and implementation plan that clearly reflected a departmentwide analysis and direction for DOD decisions on computer centers. We believe such a departmentwide approach is consistent with the intent of the Bulletin and the Clinger-Cohen Act to ensure that opportunities to consolidate centers among services and components were maximized. Instead, these officials stated that OMB received separate and conflicting responses that failed to provide a clear view of consolidation across components. OMB officials further stated they had difficulty determining how many centers DOD currently had and planned to have after the consolidations. When we discussed the multiple submissions from DOD with Defense officials, the Assistant Secretary of Defense for C3I acknowledged that the military services and DOD components had developed individual plans. However, he believed that separate plans were allowed by the OMB Bulletin and that OMB did not request a departmentwide strategy or plans. Nevertheless, the Assistant Secretary agreed that the Department needs departmentwide policy guidance and a framework as DOD seeks additional opportunities for economies and efficiencies in its data center operations. The Assistant Secretary also agreed that future decisions should be based on sound business analyses and that the Clinger-Cohen Act provides a context and leverage for these guidelines. Although DOD has been consolidating its computer centers since 1990, we found, and the Assistant Secretary of Defense for C3I agreed, that DOD lacks several decision-making tools that are imperative to any computer consolidation and modernization effort.
First, it has not set targets or established policy for basic things, such as how many computer centers the Department actually needs, the numbers and skill mix of staff that are required to operate the centers, and what constitutes an optimum computer center. It also has no mechanism for ensuring that the best money-saving opportunities have been considered by the individual services and components or that consolidation efforts will conform to federal requirements or even the needs of the Department as a whole. As discussed earlier in this report, private companies we visited during our review found that setting such targets—through policies and procedures—and oversight mechanisms were key to the success of their consolidation efforts. Without them, DOD will have difficulty identifying problematic strategies and preventing some of its computer center investments from being wasteful. The private companies we visited during our review found it necessary to direct their computer center consolidation efforts from a corporatewide perspective and to clearly delineate the makeup and number of centers that the companies were aiming for. More specifically, these companies established policies that defined what constituted an optimum computer center in terms of processing capacity and the numbers and skill mix of its staff; how many centers the corporation needed; what computer center functions were so critical to carrying out the company’s mission that they could not be outsourced; what cost and performance goals were relevant for the centers; and how the centers should be compared, or benchmarked, to more successful operations. They also established corporatewide procedures for implementing these policies. During our review, we also learned that DOD visited private companies, including the ones we visited. DOD officials benchmarked this industry experience to determine how best to prepare, justify, and implement prior departmentwide efforts to consolidate and standardize computer centers from 1990 to 1994. For example, DOD learned examples of private-sector criteria that could be used to select megacenters and the level of processing capacity and expandable floor space these centers should have. However, it did not use the lessons learned from these visits to prepare departmentwide policies and procedures. As a result, individual services and components do not have a consistent basis for determining what constitutes an optimum center; what their performance or staffing targets should be; or which functions are inherently governmental or can be outsourced. For example, these services and components do not have departmentwide targets that they can set as goals for the processing capacities of their mainframe or mid-tier centers. In a March 1996 report on DOD’s acquisition of computer centers, DOD’s inspector general specifically noted that Defense lacked complete “policies and procedures on acquiring and managing the proper mix of mainframe and mid-tier computers to process corporate data” and that without such policies and procedures—especially those for mid-tier processors—DOD’s potential for acquiring excess computer processing capabilities increases. 
The Inspector General also noted that if DOD would coordinate its processing needs it could, among other things, (1) take advantage of the open systems infrastructure concept to resolve operational problems, (2) better track and report information management costs on a DOD-wide basis, (3) better manage the transition from existing outdated systems to migration systems, and (4) improve management of computer security. DOD agreed with the Office of the Inspector General that it should establish procedures for evaluating and providing corporate information processing and storage requirements on a DOD-wide basis rather than on an individual program basis. However, DOD noted that it should proceed with care in implementing this recommendation because of its implications for centralized management and control. According to officials in the office of the Assistant Secretary for C3I, DOD plans to determine if, and to what extent, it has a mid-tier computing problem before issuing policies and procedures to address that problem. Under the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act, passed in 1996, the Assistant Secretary of Defense for Command, Control, Communications and Intelligence, as DOD’s Chief Information Officer (CIO), is supposed to develop and implement management policy and procedures to ensure that major information technology related efforts conform to departmentwide goals. In a memorandum dated November 6, 1995, the Assistant Secretary expressed an intent to monitor the consolidation initiatives to (1) ensure consistent interpretation and implementation of OMB Bulletin 96-02 across the Department, (2) ensure that consolidation efforts are consistent with the DISA plans, and (3) identify issues and develop strategies for resolving them quickly. Accordingly, the Assistant Secretary set up an advisory group to provide policy guidance for the Department’s efforts to consolidate and outsource computer center operations. However, this group has not yet prepared this critical guidance nor has it been effective in achieving its stated monitoring objectives. Under the Clinger-Cohen Act, the Secretary of Defense, with the advice and assistance of the CIO, is responsible for establishing a mechanism for ensuring that the military services and components have considered the best investment options and consolidation efforts that will meet the needs of the Department as a whole. In its guidance to agencies on evaluating information technology investments, OMB suggests that such a mechanism take the form of an investment review board, or senior management team, that would review information technology funding decisions. In their decision-making process, the team would consider such things as strategic improvements versus maintenance of current operations, new projects versus ongoing projects, risks, opportunity costs, and budget constraints. The Assistant Secretary C3I also charged the advisory group discussed above with the responsibility for providing oversight for computer center consolidation efforts. Yet, to date, neither the advisory group nor any other DOD component has provided this oversight. The Assistant Secretary C3I believes that his authority as DOD’s chief information officer for providing such oversight has been strengthened by the Clinger-Cohen Act. However, he also believes that his office lacks the staff and departmentwide support to establish such oversight. 
Without this important oversight mechanism, DOD does not have a means for assessing whether the individual services and components considered the cost-effectiveness and technical feasibility of their computer center alternatives from a departmentwide perspective and whether their implementation approaches, schedules, and funding plans are realistic. This also precludes Defense from having an opportunity to review the consistency of the individual plans and identify and recommend areas where even more monetary and efficiency gains could be achieved through inter-service and component efforts. Without better coordination and oversight of computer center consolidation efforts, the best Defense can hope to achieve from its computer center consolidations is optimization at or below the component level. It will certainly miss out on the chance to ensure that the most investment-worthy opportunities are identified and implemented, such as those that involve services and components merging their computer centers. Moreover, millions of dollars in computer center investments and operating expenses may well end up being wasted since individual components and services are planning without departmentwide information processing needs in mind and without the benefit of clearly defined organizationwide policies and procedures for the consolidation efforts and effective oversight mechanisms. Centralized coordination of computer center optimization efforts and strong policies, procedures, and oversight were integral to the success of the corporations we visited in their efforts to consolidate computer centers. They should be for Defense as well. We recommend that the Secretary of Defense direct the Department's Chief Information Officer to develop an integrated, departmentwide plan for improving the cost and operations of its computer centers. Until this plan is approved by the Secretary, we further recommend that the Secretary of Defense limit any capital investments in the Department's computer centers to investments that meet critical technology needs to operate the DOD computer centers. The Department's CIO should certify that these investments comply with departmentwide goals and technical standards. We also recommend that as a basis for this plan and for future decisions concerning consolidation, modernization, and outsourcing of computer centers, Defense's Chief Information Officer develop policies and related procedures that address the following: (1) what constitutes an optimum computer center in terms of processing capacity and staff numbers and skills; (2) how many computer centers are needed; (3) which of its computer center operations are inherently governmental and/or require component-unique center solutions and thus cannot be consolidated or outsourced; (4) how DOD should compare its computer center services with those of other public-sector and private-sector providers in terms of cost, speed, productivity, and quality of outputs and outcomes; and (5) which cost and performance goals are relevant for comparing departmentwide alternatives.
We also recommend that Defense's Chief Information Officer establish or incorporate within its existing processes, as practical, the necessary oversight to ensure that the above recommended departmentwide plan and future computer center consolidation, modernization, and outsourcing decisions (1) are being developed in accordance with the above policies and procedures, (2) are based on a sound analysis of alternatives, and (3) consider the goals and needs of the entire department. Finally, we recommend that the Director of the Office of Management and Budget (1) clarify its Bulletin, particularly in regard to mid-tier consolidation criteria and its intent to have an integrated Department of Defense submission, and (2) require the Department of Defense to replace its prior multiple submissions in response to this new guidance with an integrated departmentwide submission that contains a departmentwide inventory of computer centers, a departmentwide consolidation strategy, and a departmentwide implementation plan. The Department of Defense provided written comments on a draft of this report. OMB provided us with oral comments. DOD concurred with our recommendation on providing oversight over its computer center efforts and partially concurred with our recommendation to develop policies and procedures to guide computer center decisions. However, DOD did not concur with our recommendation to limit any capital investments in the Department's computer centers until an integrated, departmentwide consolidation plan is prepared. Defense's response to this report is summarized below, along with our evaluation. Appendix II contains Defense's comments along with our more detailed evaluation. DOD agreed that it needs to develop a prudent framework for achieving potential savings through its future computer center consolidation, modernization, and outsourcing decisions. DOD added that it is developing such a framework as part of its effort to implement the Clinger-Cohen Act. DOD also questioned the need for an integrated consolidation and outsourcing plan since the Department has already consolidated many of its computer centers, with significant reported savings, without such a plan. However, during our review, DOD officials acknowledged that unlike prior consolidation efforts, DOD has allowed the components considerable flexibility in their current consolidation efforts, without strategic direction from the Department. Thus, we continue to believe that an integrated, departmentwide plan is needed to show that the Department's computer center decisions reflect sound choices for meeting departmentwide processing needs and not just those of the individual components. In discussing our recommendation on developing policies and procedures for making consolidation and outsourcing decisions, DOD agreed that these are necessary. However, DOD believed that it should complete its development of an integrated management framework for implementing the Clinger-Cohen Act before developing the specific policies and procedures we recommended. We are encouraged by the Department's effort to begin to develop a management framework for implementing the Clinger-Cohen Act, especially if it includes the policies and procedures we recommend in this report. The report detailing DOD's plans for this framework was submitted to the Congress on March 14, 1997.
Consequently, if DOD intends to include these policies and procedures in the framework, we believe it should limit making computer center decisions and investments to those that meet critical technology needs to operate the centers until the framework is finalized. In commenting orally on this report, OMB stated that it believed our report overemphasized the importance of consolidating mid-tier processors within the context of OMB Bulletin 96-02. We disagree; we continue to believe that the consolidation strategy needs to include mid-tier processors as they are a vital component of the services offered by the computer centers. We are sending copies of this report to the Ranking Minority Member of your Committee and the Chairmen and Ranking Minority Members of the House and Senate Committees on Appropriations, the Senate Committee on Governmental Affairs, the House and Senate Committees on the Budget, the Senate Committee on Armed Services, and the House Committee on Government Reform and Oversight. Also, we are sending copies to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Department of Defense Chief Information Officer; the Director of the Defense Information Systems Agency; the Director of the Defense Logistics Agency; the Director of DISA's Westhem Command; the Director of the Office of Management and Budget; and other interested parties. Copies will be made available to others upon request. If you have any questions about this report, please call me at (202) 512-6240 or Mickey McDermott, Assistant Director, at (202) 512-6219. Other major contributors to this report are listed in appendix IV. To assess whether DOD has an effective framework in place for making and executing its computer center decisions, we interviewed staff and obtained documentation from the following federal activities: the Office of Management and Budget's Office of Information Policy and Technology Branch, which has responsibility for overseeing agency implementation of OMB Bulletin 96-02; the General Services Administration's Office of Governmentwide Policy and Federal Systems Management Center, which provided documentation on matters federal agencies should consider when making consolidation, optimization, or outsourcing decisions; the Office of the Assistant Secretary of Defense, Command, Control, Communications, and Intelligence, which is the office of DOD's Chief Information Officer; various offices of the Defense Information Systems Agency, primarily in Arlington, Virginia; and Army, Navy, and Air Force staffs and offices in Arlington, Virginia, with responsibility for making decisions on consolidating, optimizing, or outsourcing their computer center operations. We also met with managers from corporations that had successfully consolidated, modernized, and outsourced their computer centers. We identified these corporations through discussions with private-sector consultants and Defense computer center officials. The corporations contacted were Boeing Computing Service, Bellevue, Washington; Electronic Data Systems, Plano, Texas; and GTE Corporation, Fairfax, Virginia. Through these interviews and related documentation, we analyzed how these companies strategically direct and oversee their decisions on alternatives and how they determine the cost and measure the performance of their computer center operations. We also met with consultants who advise computer center managers on improving their services.
The consultants contacted were the Center for Naval Analyses, Compass America, Inc., Coopers and Lybrand, and the Gartner Group. In these discussions, we identified best practices and important performance measures that they believe well-managed computer centers should use to benchmark their performance with other computer centers. In addition, we interviewed senior officials at the Defense Science Board to discuss the Board's high-level study done for DOD management on the outsourcing of select DOD activities, including its computer centers. Finally, we met with DOD officials in the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to discuss their actions to implement a departmentwide decision-making framework for making computer center investment decisions. To assess the effectiveness of DOD's framework, we compared the framework with best practices used by leading organizations and with the requirements of the Clinger-Cohen Act. Also, through this office, we obtained and analyzed DOD's submissions to OMB in compliance with OMB Bulletin 96-02 to determine whether these submissions met OMB's requirements and had been prepared to meet departmentwide information processing needs. We did not validate the accuracy of the numbers provided by DOD on its computer centers. Our work was performed from March 1996 through January 1997 in accordance with generally accepted government auditing standards. We performed our work primarily at the office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence and at DISA headquarters offices in Arlington, Virginia. The following are GAO's comments on the Department of Defense's letter dated March 10, 1997. 1. We acknowledge that DOD has reported significant savings through its prior consolidation efforts and have expanded the report to reflect the fact that DOD has consolidated 194 DOD computer centers into 16 DISA megacenters, at a reported reduction in processing costs of over $500 million. (See the section entitled "DOD Recognizes Benefits of Further Consolidating, Modernizing and Outsourcing Computer Centers.") As appropriate, we also expanded the report to acknowledge DOD's use of industry practices to help make these reductions. (See the section entitled "DOD Lacks Critical Decision-making Tools for Consolidation Efforts.") 2. We agree that DOD needs to determine whether further economies and efficiencies are possible and, if so, what strategies should be employed to reap these savings. The recommendations to DOD and OMB contained in this report are intended to facilitate and guide these determinations. 3. The report fully describes the differing views of DOD and OMB officials for interpreting OMB Bulletin 96-02 in two broad areas: (1) DOD's consolidation plans for its non-mainframe small and mid-tier computer centers and (2) the number of DOD plan submissions required by OMB. In the report, we pointed out that OMB officials did not agree with DOD's interpretation that non-mainframe computer centers should only be included in their inventories but not their consolidation strategies. OMB's position, which we support, is that non-mainframe centers should have been described in DOD's inventories and consolidation strategies, as the purpose of OMB Bulletin 96-02 is to look for ways to consolidate all DOD's computer centers, not just its mainframe computer centers.
We made our recommendation that OMB clarify the Bulletin with regard to its mid-tier consolidation criteria in order to preclude any future confusion. We further recommended that OMB clarify in the Bulletin that while DOD has previously been permitted to provide separate submissions for the three services and for DOD, it should be required to provide a single, integrated submission for the entire Department.

4. We provided and discussed an earlier draft of this report with DOD officials and have incorporated their comments as appropriate to improve the accuracy of the report. The reference in DOD's letter to our handling of the Army's centers in table 2 refers to wording that was provided by the Army. However, note b to table 2 has been expanded to reflect Army's views that some of its centers provide unique missions (for example, command and control, and National Guard).

5. We did not include DOD's second enclosure in appendix II because it is an annotated copy of this report. This enclosure contained a few technical comments, which we have incorporated into the final report.

This appendix provides information on the numbers of mainframes and mid-tier and small processors owned by the services and components, including the National Imagery and Mapping Agency. The Defense Commissary Agency does not have any mainframes.

The tables that follow summarize our analysis of the extent to which DOD services and components complied with the planning elements called for by OMB. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. DISA did not submit an alternatives analysis to OMB. However, during our review, we found that DISA had analyzed the costs and benefits associated with (1) outsourcing megacenter services and (2) consolidating 16 megacenters into 6 centers. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. Technical architecture submitted to OMB was not based on an approved alternatives analysis, nor did it identify receiving and closing data centers. Architecture did not address workload realignment or communications architecture. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. No implementation approach submitted to OMB. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. No funding plan submitted to OMB. Exceptions that could not be included in the consolidation plan. No exceptions identified to OMB. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. The Air Force did not submit an alternatives analysis to OMB; however, it did describe plans to move towards a mid-tier architecture for some of its centers and to outsource those centers that cannot be moved to mid-tiers. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. The Air Force did not submit an architectural design to OMB, only individual computer center approaches to consolidation.
High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. A partial implementation approach was submitted to OMB in that schedules and milestones were provided. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. No funding plan submitted to OMB, although some data were provided on estimated savings from consolidation. Exceptions that could not be included in the consolidation plan. Exceptions were requested from OMB for the following reasons: 4 centers performed applications programming, 12 centers operated national security systems, 1 center met OMB’s exception criteria by having less than five full-time employees, and 2 centers were transitioning to the Air Force Working Capital Fund. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Alternatives analyses were submitted to OMB for the computer centers Army considered candidates for consolidation or outsourcing: the separate Single Agency Manager computer centers operated by the Air Force and the Army and two Army personnel computer centers. Outsourcing to DISA and leasing were among the alternatives considered. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. Architectural designs were submitted for the Single Agency Manager and for the two personnel computer centers. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. A high-level implementation approach was submitted to OMB. Resources, but not schedules and milestones, were submitted for major consolidation tasks. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. A funding plan was not submitted, but Army plans to chargeback costs to customers. Exceptions that could not be included in the consolidation plan. Exceptions identified for areas such as National Guard, civil works, intelligence, command and control, research and development, and wargaming. Centers not analyzed for consolidation were reported as centers that support networks or systems in a “distributed environment.” Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Each of 10 Navy commands reported a consolidation strategy for its centers that Navy believed met OMB’s criteria for consolidation. Two of these commands, the Bureau of Naval Personnel and the Naval Supply Systems Command, reported that DISA megacenters already processed their information. The Naval Air Systems Command and the Navy Facilities Engineering Command also described plans for DISA to process their information. Two other commands, the Naval Sea Systems Command and the Bureau of Medicine and Surgery, provided alternative analyses supporting their decision to continue to process their information in-house. The remaining four commands did not provide alternative analyses to OMB. 
Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. Architectural designs were submitted but were not based on alternatives analyses for four commands. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. Completion dates were provided, but schedules of major consolidation tasks or resource needs were not provided. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. A funding plan was not provided to OMB, but Navy plans to fund its modernization efforts through its information technology budget. Exceptions that could not be included in the consolidation plan. Not applicable. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. The agency has decided to move to a client/server architecture. No consolidation strategy was submitted for this move. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. None submitted. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. None submitted. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. None submitted. Exceptions that could not be included in the consolidation plan. No exemptions requested. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. The agency reported that it met the OMB target MIPS for a minimum size computer center at both of its computer centers. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. See above. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. See above. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. See above. Exceptions that could not be included in the consolidation plan. See above. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. Not applicable. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. Not applicable. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. Not applicable. Exceptions that could not be included in the consolidation plan. 
Requested exemption because the Service’s computer center meets the minimum target size for computer centers. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Alternatives analyses were submitted for 3 of the agency’s 12 computer centers, with selection of option to consolidate in-house operations. Two of the agency’s computer centers were exempt (see below) and another two were designated for BRAC. Alternative analyses were not submitted for the remaining 5 computer centers, which will support the agency’s planned Distribution Standard System. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. Not submitted for centers that will support the Distribution Standard System because the agency believed these requirements were not applicable to its overall strategy. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. Not submitted for centers that will support the Distribution Standard System because the agency believed these requirements were not applicable to its overall strategy. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. Not submitted for centers that will support the Distribution Standard System because the agency believed these requirements were not applicable to its overall strategy. Exceptions that could not be included in the consolidation plan. Two of the agency’s computer centers meet OMB’s target size for computer centers. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Alternatives analyses were not submitted, but the agency plans to transition processing from remaining agency computer centers to DISA megacenters by the middle of fiscal year 1998. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. See above. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. Broad milestones for transition were provided, but no additional information. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. A funding plan was not provided to OMB. The agency plans to provide OMB with a funding plan if DISA can process its applications. Exceptions that could not be included in the consolidation plan. None requested. Alternatives analysis reflecting the technical feasibility and cost-effectiveness of alternatives—including outsourcing. Agency conducted an alternatives analysis to support the consolidation of its information processing in-house. Alternatives such as commercial outsourcing were not considered because of the agency’s command, mission, and security considerations. Architecture design, or technical solution, based on selected data center consolidation alternative, and identifying the receiving and closing data centers and workload realignment as well as the communications architecture. 
A complete architectural design was provided. High-level implementation approach identifying major consolidation tasks and presenting a schedule, milestones, and resources. A complete high-level response was provided. Funding plan identifying and forecasting costs associated with the consolidation process and funding requirements for all major tasks associated with the consolidation. A complete funding plan was provided. Exceptions that could not be included in the consolidation plan. None requested. Karl G. Neybert, Evaluator
Pursuant to a congressional request, GAO reviewed: (1) the Department of Defense's (DOD) plans to consolidate, outsource, and modernize its computer center operations; and (2) whether DOD has an effective framework in place for making and executing these decisions. GAO did not validate the accuracy of the information provided by DOD on the numbers and costs of computer centers, the alternative analyses, funding plans, and processing capacities. GAO noted that: (1) DOD has recognized the need to continue reductions in the cost of its computer centers' operations through consolidation, modernization, and outsourcing, but it has not yet established an effective framework for making these decisions; (2) this framework would include departmentwide policies and procedures critical to the success of its efforts to improve computer centers; (3) these policies and procedures would establish targets for how many computer centers DOD actually needs, define how mainframes and mid-tier computer operations should be consolidated, and identify the numbers and skill mix of staff required to operate the centers and what constitutes an optimum computer center; (4) DOD also has no mechanism for ensuring that the best money-saving opportunities have been considered by the individual services and components or that consolidation efforts will conform to federal requirements or even meet the needs of DOD as a whole; (5) as a result, DOD services and components have developed individual strategies for consolidating and modernizing their computer centers that are inconsistent with, and in some cases contradictory to, the needs of DOD as a whole and may well cause DOD to waste millions of dollars in computer center expenditures; (6) in addition, the consolidation strategies of the military services and DOD components did not always fully address critical planning elements required by the Office of Management and Budget (OMB) that could help reduce the risk of waste, including alternatives analyses, high-level implementation plans, and funding plans; (7) further, GAO found that the OMB and departmental guidance, particularly addressing mid-tier computer centers, was unclear; (8) this resulted in inconsistent interpretation and reporting for these centers; (9) therefore, OMB and DOD do not have assurance that the computer consolidation strategies are sound; (10) without better management over the implementation of its computer center strategies, DOD at best will only achieve optimization at the component level and forgo optimization for DOD as a whole; and (11) moreover, DOD's chief information officer (CIO) is now required by the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the Fiscal Year 1997 DOD Authorization Act to develop and implement a plan for a management framework with policies and procedures as well as effective oversight mechanisms for ensuring that major technology-related efforts, such as the computer center consolidations, conform to departmentwide goals.
The Recovery Act was enacted to help preserve and create jobs and promote economic recovery, invest in technology to spur technological advances, and invest in infrastructure to provide long-term economic benefits, among other things. The act was a response to significant weakness in the economy; in February 2011, the Congressional Budget Office (CBO) estimated the net cost at $821 billion. Congress and the administration built into the Recovery Act numerous provisions to increase transparency and accountability, including requiring recipients of some funds to report quarterly on a number of measures. To implement these requirements, the Office of Management and Budget (OMB) worked with the newly established Recovery Board to deploy a nationwide system at www.federalreporting.gov (FederalReporting.gov) for collecting data submitted by the recipients of funds. OMB set the specific time line for recipients to submit reports and for agencies to review the data. Recipients are required to submit the reports in the month after the close of a quarter, and, by the end of the month, the data are to be reviewed by federal agencies for material omissions or significant reporting errors before being posted to the publicly accessible Recovery.gov. The Recovery Board's goals for this Web site were to promote accountability by providing a platform to analyze Recovery Act data and serve as a means of tracking fraud, waste, and abuse allegations, and to provide the public with accurate, user-friendly information. The reporting requirements apply only to nonfederal recipients of funding, including all entities receiving Recovery Act funds directly from the federal government, such as state and local governments, private companies, educational institutions, nonprofits, and other private organizations. OMB guidance, consistent with the statutory language in the Recovery Act, states that these reporting requirements apply to recipients who receive funding through the Recovery Act's discretionary appropriations, not recipients receiving funds through entitlement programs, such as Medicaid, or tax programs. Individuals are also not required to report. Federal law does not prohibit a contractor with unpaid federal taxes from receiving contracts from the federal government. Currently, regulations calling for federal agencies to do business only with responsible contractors do not require contracting officers to consider a contractor's tax delinquency unless the contractor was specifically debarred or suspended by a debarring official for specific actions, such as conviction for tax evasion. According to the Federal Acquisition Regulation (FAR), a responsible prospective contractor is a contractor that meets certain specific criteria, including having adequate financial resources and a satisfactory record of integrity and business ethics. However, the FAR does not currently require contracting officers to take into account a contractor's tax debt when assessing whether a prospective contractor is responsible and does not currently require contracting officers to determine if federal contractors have unpaid federal taxes at the time a contract is awarded. Further, federal law generally prohibits the disclosure of taxpayer data to contracting officers. Thus, contracting officers do not have access to tax data directly from IRS unless the contractor provides consent.
On May 22, 2008, the Civil Agency Acquisition Council and the Defense Acquisition Regulations Council amended the FAR by adding conditions regarding delinquent federal taxes and the violation of federal criminal tax laws. The FAR rule requires offerors on federal contracts to certify whether or not they have, within a 3-year period preceding the offer, been convicted of or had a civil judgment rendered against them for, among other things, violating federal criminal tax law, or been notified of any delinquent federal taxes greater than $3,000 for which the liability remains unsatisfied. This certification is made through the Online Representations and Certifications Application (ORCA) Web site, orca.bpn.gov. Neither federal law nor current governmentwide policies for administering federal grants or direct assistance prohibit applicants with unpaid federal taxes from receiving grants and direct assistance from the federal government. OMB Circulars provide only general guidance with regard to considering existing federal debt in awarding grants. Specifically, the Circulars state that if an applicant has a history of financial instability, or other special conditions, the federal agency may impose additional award requirements to protect the government’s interests. The Circulars require grant applicants to self-certify in their standard government application (SF 424) whether they are currently delinquent on any federal debt, including federal taxes. There is no requirement for federal agencies to take into account an applicant’s delinquent federal debt, including federal tax debt, when assessing applications. No assessment of tax debt is required by OMB on a sampling or risk-based assessment. To improve the collection of unpaid taxes, Congress, in the Taxpayer Relief Act of 1997, authorized IRS to collect delinquent tax debt by continuously levying (offsetting) up to 15 percent of certain federal payments made to tax debtors. The payments include federal employee retirement payments, certain Social Security payments, selected federal salaries, contractor, and other vendor payments. Subsequent legislation increased the maximum allowable levy amount to 100 percent for payments to federal contractors and other vendors for goods or services sold or leased to the federal government. The continuous levy program, now referred to as the Federal Payment Levy Program (FPLP), was implemented in 2000. Under the FPLP, each week IRS sends the Department of the Treasury’s Financial Management Service (FMS) an extract of its tax debt files. These files are uploaded into the Treasury Offset Program. FMS sends payment data to this offset program to be matched against unpaid federal taxes. If there is a match and IRS has updated the weekly data sent to the offset program to reflect that it has completed all statutory notifications, the federal payment owed to the debtor is reduced (levied) to help satisfy the unpaid federal taxes. In creating the weekly extracts of tax debt to forward to FMS for inclusion in the offset program, IRS uses the status and transaction codes in the master file database to determine which tax debts are to be included in or excluded from the FPLP. Cases may be excluded from the FPLP for statutory or policy reasons. 
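To picture the weekly match-and-levy step just described, here is a minimal sketch in Python. It is an illustration only: the record shapes, field names, and the simple 15 percent/100 percent levy rule are our assumptions, not the actual IRS or FMS systems.

```python
# Illustrative sketch of the FPLP match-and-levy step (assumed data shapes, not IRS/FMS systems).

# Assumed weekly IRS extract: TIN -> unpaid balance and whether statutory notifications are complete.
tax_debts = {
    "12-3456789": {"balance": 50_000.00, "notifications_complete": True},
    "98-7654321": {"balance": 12_000.00, "notifications_complete": False},  # levied only after notices finish
}

def levy_amount(payment, debt, is_vendor_payment):
    """Portion of a federal payment offset against unpaid taxes.

    Assumed rule of thumb from the report: up to 15 percent of most federal
    payments, up to 100 percent of contractor and other vendor payments.
    """
    cap = 1.00 if is_vendor_payment else 0.15
    return min(payment * cap, debt["balance"])

def process_payment(tin, payment, is_vendor_payment):
    debt = tax_debts.get(tin)
    # No match, or notifications not yet complete: pay in full, levy nothing.
    if debt is None or not debt["notifications_complete"]:
        return payment, 0.0
    levied = levy_amount(payment, debt, is_vendor_payment)
    debt["balance"] -= levied
    return payment - levied, levied

# A $100,000 vendor payment to a matched debtor can be offset up to the full unpaid balance.
net, offset = process_payment("12-3456789", 100_000.00, is_vendor_payment=True)
print(f"paid {net:,.2f}, levied {offset:,.2f}")  # paid 50,000.00, levied 50,000.00
```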
Cases excluded from the FPLP for statutory reasons include tax debts that had not completed IRS's notification process and tax debtors who filed for bankruptcy protection or other litigation, agreed to pay their tax debt through monthly installment payments, or requested to pay less than the full amount owed through an offer in compromise. Cases excluded from the FPLP for policy reasons include those tax debtors whom IRS has determined to be in financial hardship, those filing an amended return, certain cases under criminal investigation, and those cases in which IRS has determined that the specific circumstances warrant excluding them from the FPLP. At least 3,700 recipients of Recovery Act contracts and grants are estimated to owe $757 million in known unpaid federal taxes as of September 30, 2009, though this amount is likely understated for reasons discussed below. This represented nearly 5 percent of the approximately 80,000 contract and grant recipients in the Recovery.gov data as of July 2010 that we reviewed. These approximately 3,700 recipients received over $24 billion through Recovery Act contracts and grants. As indicated in figure 1, corporate income taxes comprised $417 million, or about 55 percent, of the estimated $757 million of known unpaid federal taxes. Payroll taxes comprised $207 million, or about 27 percent, of the taxes owed by Recovery Act contract and grant recipients we reviewed. Unpaid payroll taxes included amounts that were withheld from employees' wages for federal income taxes, Social Security, and Medicare but not remitted to IRS, as well as the matching employer contributions for Social Security and Medicare. The remaining $133 million was from other unpaid taxes, including excise and unemployment taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee's wages, the employer is deemed to have a responsibility to hold these amounts "in trust" for the federal government until the employer makes a federal tax deposit in that amount. When these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts as well as the employer's matching Federal Insurance Contributions Act contributions for Social Security and Medicare. Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP) under 26 U.S.C. § 6672. Failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of not more than 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. A substantial amount of the estimated unpaid federal taxes shown in IRS records owed by Recovery Act contract and grant recipients had been outstanding for several tax years. As reflected in figure 2, about 65 percent of the estimated $757 million in unpaid taxes was for tax periods from tax years 2003 through 2008, and about 35 percent was for tax periods prior to that. For the 15 cases of Recovery Act recipients with outstanding tax debt that we selected for a detailed audit and investigation, we found abusive or potentially criminal activity related to the federal tax system. Specifically, the 15 recipients we investigated owed delinquent payroll taxes.
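As a simple illustration of the trust fund liability at issue in these payroll tax cases, the sketch below totals the amounts withheld from employees plus the employer's matching contributions for a hypothetical payroll; the wage figure and tax rates are illustrative assumptions, not figures drawn from the case studies.

```python
# Hypothetical payroll: amounts an employer withholds and must hold "in trust" for IRS,
# plus the employer's matching Social Security and Medicare contributions.
# All wages and rates below are illustrative assumptions only.

gross_wages         = 500_000.00               # quarterly wages for a hypothetical firm
withheld_income_tax = 0.12 * gross_wages       # assumed average income tax withholding rate
fica_rate           = 0.062 + 0.0145           # illustrative Social Security + Medicare rate

employee_fica       = fica_rate * gross_wages  # withheld from employees' wages
employer_match      = fica_rate * gross_wages  # employer's matching contribution

trust_fund_portion  = withheld_income_tax + employee_fica
total_owed_to_irs   = trust_fund_portion + employer_match

print(f"Withheld from employees (trust fund): ${trust_fund_portion:,.2f}")
print(f"Employer matching contributions:      ${employer_match:,.2f}")
print(f"Total owed to IRS for the period:     ${total_owed_to_irs:,.2f}")
```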
As discussed previously, businesses and organizations with employees are required by law to collect, account for, and transfer to IRS income and employment taxes withheld from employees' wages; failure to do so may result in civil or criminal penalties. These 15 recipients—8 contract and 7 grant recipients—received about $35 million in Recovery Act funds. The 15 case study recipients typically operate in industries such as construction, engineering, security, and technical services. The amount of known unpaid taxes associated with these case studies is about $40 million, ranging from approximately $400,000 to over $9 million. IRS has taken collection or enforcement actions (e.g., filing of federal tax liens, assessment of a TFRP) against all 15 of these recipients. In addition, IRS records indicate that at least one of the entities is under criminal investigation. Table 1 highlights the 15 recipients with known unpaid taxes. We have referred all 15 recipients to IRS for criminal investigation, if warranted. Our analysis and investigation found that only 1 of these 15 Recovery Act recipients was subject to the new FAR requirement for certification of tax debts in relation to its Recovery Act award. Because that contractor was current on its repayment agreement, the contractor was not required to disclose its tax debts. The other 14 recipients were grant recipients or contract subrecipients. However, 1 of the 14 companies that recently filed an Online Representations and Certifications Application (ORCA) improperly stated that the company had not been notified of any delinquent federal taxes (greater than $3,000) within the preceding 3 years. We did not identify any circumstances (e.g., a current repayment agreement) that would allow the company to make such a certification. We provided a draft of our report to FMS, IRS, and the Recovery Accountability and Transparency Board (Recovery Board) for review and comment. FMS and IRS provided technical comments, which were incorporated into this report. IRS further noted that it had taken enforcement and collection actions in all of the 15 cases we investigated. This included filing federal tax liens to protect the government's interest in 13 of the 15 cases, and investigating and asserting the TFRP in 12 of the 15 cases. Of the 15 cases, 6 have established installment agreements to pay their outstanding tax liabilities. Except in cases of bankruptcy or where it has been determined that there is currently no meaningful collection potential, IRS is actively investigating and pursuing collection in the remaining cases. We received written comments on a draft of this report from the Recovery Board's Director, Accountability (see app. II). The Director stated that, as we acknowledged in our report, federal law places considerable restrictions on the disclosure of taxpayer information by IRS to other federal entities, including the Recovery Board. He further stated that if such taxpayer information were made available to the Recovery Board, it could work more proactively to prevent fraud, waste, and abuse of government funds. As far back as 1992, we have said that Congress should consider whether tax compliance should be a prerequisite for receiving a federal contract.
In 2004, we recommended that the Director of OMB develop and pursue policy options (in accordance with restrictions on the disclosure of taxpayer information) for prohibiting federal contract awards to contractors in cases in which abuse to the federal tax system has occurred and the tax owed is not contested. Options could include designating such tax abuse as a cause for governmentwide debarment and suspension or, if allowed by statute, authorizing IRS to declare such businesses and individuals ineligible for government contracts. We continue to support efforts to implement this recommendation. As agreed with your offices, unless you publicly release its contents earlier we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of the Financial Management Service, the Commissioner of Internal Revenue, the Chairman of the Recovery Accountability and Transparency Board and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Our objectives were to: (1) determine, to the extent possible, the magnitude of known tax debt which is owed by Recovery Act contract and grant recipients; and (2) provide examples of Recovery Act contract and grant recipients who have known unpaid federal taxes. To determine, to the extent possible, the magnitude of known tax debt owed by Recovery Act contract and grant recipients, we obtained and analyzed quarterly recipient reports submitted by contractors and grantees, as available through www.recovery.gov (Recovery.gov) through July 2010. Specifically, we obtained all contract and grant recipient reports from the fourth quarterly submission, and all reports from prior quarterly submissions that were marked as “final” by the recipients. Since Recovery.gov data do not contain taxpayer identification numbers (TINs) required for comparisons against IRS tax debt data, we obtained the Central Contractor Registry (CCR) database in order to obtain the TINs for Recovery Act contract and grant recipients. We matched the Data Universal Numbering System (DUNS) number available in the quarterly recipient reports with CCR to obtain the TINs for the Recovery Act contract and grant recipients. We were not able to match about 17,000 recipients in Recovery.gov to the CCR database. As such, those 17,000 recipients were not included in our analysis. We obtained and analyzed known tax debt data from the Internal Revenue Service (IRS) as of September 30, 2009. Using the TIN we electronically matched IRS’s tax debt data to the population of Recovery Act contract and grant recipient TINs. To avoid overestimating the amount owed by Recovery Act contract and grant recipients with known unpaid tax debts and to capture only significant tax debts, we excluded from our analysis tax debts meeting specific criteria to establish a minimum threshold in the amount of tax debt to be considered when determining whether a tax debt is significant. 
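The matching steps described above, joining DUNS numbers from the recipient reports to CCR to obtain TINs and then to IRS tax debt records, can be sketched as follows. The data, column names, and the pandas-based approach are our assumptions for illustration; the exclusion criteria applied after the match are the ones enumerated next.

```python
import pandas as pd

# Illustrative sketch of the DUNS -> TIN -> tax debt matching; all data and column names are assumed.
recipients = pd.DataFrame({
    "duns":  ["111111111", "222222222", "333333333"],
    "award": [1_200_000, 450_000, 3_000_000],
})
ccr = pd.DataFrame({                       # Central Contractor Registry: DUNS -> TIN
    "duns": ["111111111", "222222222"],
    "tin":  ["12-3456789", "98-7654321"],
})
irs_debts = pd.DataFrame({                 # IRS unpaid assessments as of the cutoff date
    "tin":        ["12-3456789", "12-3456789"],
    "tax_period": ["2007-12", "2008-12"],
    "amount":     [250_000.00, 90_000.00],
})

# Step 1: obtain TINs for Recovery Act recipients by matching their DUNS numbers to CCR.
with_tin  = recipients.merge(ccr, on="duns", how="left")
unmatched = int(with_tin["tin"].isna().sum())   # recipients with no CCR match drop out of the analysis
matched   = with_tin.dropna(subset=["tin"])

# Step 2: match the TINs against IRS tax debt records.
debts_by_recipient = matched.merge(irs_debts, on="tin", how="inner")

print(f"Recipients without a CCR match: {unmatched}")
print(debts_by_recipient[["duns", "tin", "tax_period", "amount"]])
```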
The criteria we used to exclude tax debts are as follows: tax debts IRS classified as compliance assessments or memo accounts for financial reporting, known tax debts from calendar year 2009 tax periods, and, recipients with total known unpaid taxes of $100 or less. The criteria above were used to exclude known tax debts that might be under dispute or generally duplicative or invalid, and known tax debts that are recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or these taxes could be invalid or duplicative of other taxes already reported. We excluded known tax debts from calendar year 2009 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayers and IRS, with the taxes paid or abated within a short time. We excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of known taxes owed by Recovery Act recipients. Using these criteria, we identified at least 3,700 Recovery Act recipients with federal tax debt. To provide examples of Recovery Act recipients who have known unpaid federal taxes, we selected 15 of the approximately 3,700 Recovery Act recipients for a detailed audit and investigation. The 15 recipients were chosen using a nonrepresentative selection approach based on data mining. Specifically, we narrowed the 3,700 recipients with known unpaid taxes to 30 cases based on (1) the amount of known unpaid taxes (including income, payroll, and other taxes); (2) the number of delinquent tax periods; (3) location; and (4) potential disclosure issues. Because we considered the number of delinquent tax periods in selecting these 15 recipients, we were more likely to select recipients who owed primarily payroll taxes; our prior work has shown delinquent payroll taxes to be an indicator of potential abusive or criminal activity. For these 30 cases, we obtained and reviewed copies of automated tax transcripts and other tax records (for example, revenue officer’s notes) from IRS as of October 2010, and reviewed these records to exclude contractors or grantees that had recently paid off their unpaid tax balances and considered other factors before reducing the number of Recovery Act recipients to 15 case studies. We did not evaluate the status of collections activities related to penalties assessed against recipient organization officers, only those assessed against the recipient organization itself. Our investigators also contacted several of the recipients and conducted interviews. These case studies serve to illustrate the sizeable amounts of taxes owed by some organizations that received Recovery Act funding and cannot be generalized beyond the cases presented. We conducted this forensic audit and related investigation from July 2010 through April 2011. We performed this forensic audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our audit findings and conclusions based on our audit objectives. We performed our related investigative work in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. For the IRS unpaid assessments data, we relied on the work we performed during our annual audit of IRS’s financial statements. 
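Applied to matched records like those in the earlier sketch, the exclusion criteria just listed might look like the following filter; the column names and code values are our illustrative assumptions, not the query GAO actually ran.

```python
import pandas as pd

# Matched recipient tax debt records (toy data; column names and codes are assumptions).
records = pd.DataFrame({
    "tin":        ["12-3456789", "12-3456789", "98-7654321", "55-5555555"],
    "tax_year":   [2007, 2009, 2008, 2006],
    "amount":     [250_000.00, 40_000.00, 60.00, 500_000.00],
    "assessment": ["agreed", "agreed", "agreed", "compliance"],
})

# Exclude compliance assessments and memo accounts (possibly disputed, duplicative, or invalid).
kept = records[~records["assessment"].isin(["compliance", "memo"])]

# Exclude debts from calendar year 2009 tax periods (often resolved or abated quickly).
kept = kept[kept["tax_year"] < 2009]

# Exclude recipients whose total remaining debt is $100 or less (insignificant for this purpose).
totals = kept.groupby("tin")["amount"].sum()
kept = kept[kept["tin"].isin(totals[totals > 100].index)]

print(kept)  # in this toy example, only the 2007 debt for 12-3456789 survives the filters
```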
While our financial statement audits have identified some data reliability problems associated with tracing IRS’s tax records to source records and including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this report’s objectives. In previous GAO reports, we have reported that fieldwork and initial review and analysis of recipient data from www.recovery.gov indicated that there were a range of reporting and quality issues, such as erroneous or questionable data entries. However, the problems identified in our previous reviews have been associated with job data fields that are not relevant to this review. In addition, for the purposes of this review, we limited the population of recipient data we reviewed to records showing continuity in reporting as demonstrated by consistency in reporting over multiple periods and by excluding certain records containing known data inconsistencies. Therefore, we determined that the data were sufficiently reliable to address our engagement objectives. Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008. Tax Compliance: Federal Grant and Direct Assistance Recipients Who Abuse the Federal Tax System. GAO-08-31. Washington, D.C.: November 16, 2007. Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-1090T. Washington, D.C.: July 24, 2007. Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-563. Washington, D.C.: June 29, 2007. Tax Compliance: Thousands of Federal Contractors Abuse the Federal Tax System. GAO-07-742T. Washington, D.C.: April 19, 2007. Medicare: Thousands of Medicare Part B Providers Abuse the Federal Tax System. GAO-07-587T. Washington, D.C.: March 20, 2007. Internal Revenue Service: Procedural Changes Could Enhance Tax Collections. GAO-07-26. Washington, D.C.: November 15, 2006. Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-887. Washington, D.C.: July 28, 2006. Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-755T. Washington, D.C.: May 25, 2006. Financial Management: Thousands of GSA Contractors Abuse the Federal Tax System. GAO-06-492T. Washington, D.C.: March 14, 2006. Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-683T. Washington, D.C.: June 16, 2005. Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-637. Washington, D.C.: June 16, 2005. Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-414T. Washington, D.C.: February 12, 2004. Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-95. Washington, D.C.: February 12, 2004. Debt Collection: Barring Delinquent Taxpayers From Receiving Federal Contracts and Loan Assistance, GAO/T-GGD/AIMD-00-167, Washington, D.C.: May 9, 2000. Unpaid Payroll Taxes: Billions in Delinquent Taxes and Penalty Assessments Are Owed. GAO/AIMD/GGD-99-211. Washington, D.C.: August 2, 1999. Tax Administration: Federal Contractor Tax Delinquencies and Status of the 1992 Tax Return Filing Season. GAO/T-GGD-92-23. Washington, D.C.: March 17, 1992.
The American Recovery and Reinvestment Act (Recovery Act), enacted on February 17, 2009, appropriated $275 billion to be distributed for federal contracts, grants, and loans. As of March 25, 2011, $191 billion of this $275 billion had been paid out. GAO was asked to determine if Recovery Act contract and grant recipients have unpaid federal taxes and, if so, to (1) determine, to the extent possible, the magnitude of known federal tax debt which is owed by Recovery Act contract and grant recipients; and, (2) provide examples of Recovery Act contract and grant recipients who have known unpaid federal taxes. To determine, to the extent possible, the magnitude of known tax debt owed by Recovery Act contract and grant recipients, GAO identified contract and grant recipients from www.recovery.gov and compared them to known tax debts as of September 30, 2009, from the Internal Revenue Service (IRS). To provide examples of Recovery Act recipients with known unpaid federal taxes, GAO chose a nonrepresentative selection of 30 Recovery Act contract and grant recipients, which were then narrowed to 15 based on a number of factors, including the amount of taxes owed and the number of delinquent tax periods. These case studies serve to illustrate the sizable amounts of taxes owed by some organizations that received Recovery Act funding and cannot be generalized beyond the cases presented. This report contains no recommendations. At least 3,700 Recovery Act contract and grant recipients--including prime recipients, subrecipients, and vendors--are estimated to owe more than $750 million in known unpaid federal taxes as of September 30, 2009, and received over $24 billion in Recovery Act funds. This represented nearly 5 percent of the approximately 80,000 contractors and grant recipients in the data from www.Recovery.gov as of July 2010 that GAO reviewed. Federal law does not prohibit the awarding of contracts or grants to entities because they owe federal taxes and does not permit IRS to disclose taxpayer information, including unpaid federal taxes, to federal agencies unless the taxpayer consents. The estimated amount of known unpaid federal taxes is likely understated because IRS databases do not include amounts owed by recipients who have not filed tax returns or understated their taxable income and for which IRS has not assessed tax amounts due. In addition, GAO's analysis does not include Recovery Act contract and grant recipients who are noncompliant with or not subject to Recovery Act reporting requirements. GAO selected 15 Recovery Act recipients for further investigation. For the 15 cases, GAO found abusive or potentially criminal activity, i.e., recipients had failed to remit payroll taxes to IRS. Federal law requires employers to hold payroll tax money "in trust" before remitting it to IRS. Failure to remit payroll taxes can result in civil or criminal penalties under U.S. law. The amount of unpaid taxes associated with these case studies were about $40 million, ranging from approximately $400,000 to over $9 million. IRS has taken collection or enforcement activities (e.g., filing of federal tax liens) against all 15 of these recipients. GAO has referred all 15 recipients to IRS for further investigation, if warranted.
DOD has been trying to successfully implement the working capital fund concept for over 50 years. However, Congress has repeatedly noted weaknesses in DOD's ability to use this mechanism to effectively control costs and operate in a business-like fashion. The Secretary of Defense is authorized by 10 U.S.C. 2208 to establish working capital funds. The funds are to recover the full costs of goods and services provided, including applicable administrative expenses. The funds generally rely on sales revenue rather than direct appropriations or other funding sources to finance their operations. This revenue is then used to procure new inventory or provide services to customers. Therefore, in order to continue operations, the fund should (1) generate sufficient revenue to cover the full costs of its operations and (2) operate on a break-even basis over time, that is, not have a gain or incur a loss. In fiscal year 2001, the Defense Working Capital Fund—which consisted of the Army, Navy, Air Force, Defense-wide, and Defense Commissary Agency working capital funds—was the financial vehicle used to buy about $70 billion in defense commodities, including fuel. The Defense Energy Support Center, as a subordinate command of DLA, buys fuel from oil companies for its customers. Military customers primarily use operation and maintenance appropriations to finance these purchases. In fiscal year 2001, reported fuel sales totaled about $4.7 billion, with the Air Force being the largest customer, purchasing about $2.7 billion. Each year the Office of the Under Secretary of Defense (Comptroller) faces the challenge of estimating and establishing a per barrel price for its fuel and other fuel-related commodities that will closely approximate the actual per barrel price during budget execution, almost a year later. The Office of the Under Secretary of Defense (Comptroller) establishes the stabilized annual price based largely upon the market price of crude oil as estimated by the Office of Management and Budget, plus a calculated estimate of the cost to refine. To this price are added other adjustments directed by Congress or DOD and a surcharge for DLA overhead and the operational costs of the Defense Energy Support Center. The services annually use these stabilized prices and their estimated fuel requirements based on activity levels (such as flying hours, steaming days, tank miles, and base operations) in developing their fuel budget requests. Figure 2 generally illustrates the process and the main organizations involved in budgeting for fuels. The stabilized annual fuel prices computed by DOD have varied over the years, largely due to volatility in the price of crude oil. For example, the stabilized annual fuel price and the Office of Management and Budget's estimated crude oil price, on which the stabilized price was based, are shown in figure 3 for fiscal years 1993 through 2003. The stabilized fuel price for each budget year remains unchanged until the next budget year, to provide price stability during budget execution. According to DOD's Financial Management Regulation, differences between the budget year price and actual prices occurring during the execution year should increase or decrease the next budget year's price. However, the regulation also provides that fund losses can occasionally be covered by obtaining an appropriation from Congress or by transferring funds from another DOD account.
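The price build-up described above, and the way a prior-year loss or a cash movement translates into a per-barrel change in a later year's stabilized price, can be illustrated with a minimal sketch. Every figure below is hypothetical, and the simple divide-by-projected-barrels rule is our illustration of the mechanics, not DOD's actual computation.

```python
# Hypothetical build-up of a stabilized annual fuel price (all figures are illustrative).

crude_estimate  = 25.00   # OMB-estimated crude oil price, dollars per barrel
refining_cost   = 8.50    # estimated cost to refine, dollars per barrel
surcharge       = 4.00    # DLA overhead and Defense Energy Support Center operating costs, per barrel

projected_barrels = 110_000_000       # projected annual sales volume (hypothetical)
prior_year_loss   = 0.00              # prior-year loss to recover (a gain would be negative)
cash_moved_out    = 800_000_000.00    # cash removed from the fund for other purposes

# A loss, or cash removed from the fund, must be made up through a higher future price.
adjustment_per_barrel = (prior_year_loss + cash_moved_out) / projected_barrels

stabilized_price = crude_estimate + refining_cost + surcharge + adjustment_per_barrel
print(f"Adjustment:       {adjustment_per_barrel:+.2f} dollars per barrel")
print(f"Stabilized price: {stabilized_price:.2f} dollars per barrel")
# With these hypothetical numbers, removing $800 million from the fund raises the price needed
# to keep the fund whole by about $7.27 per barrel, the order of magnitude shown later in table 2.
```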
DOD is also authorized to move money out of the fund by annual appropriation acts. These acts limit the amount of funds that can be moved and the purposes for which the funds can be used. Specifically, money can only be removed from the fund for higher priority items, based on unforeseen military requirements, than those for which originally appropriated and cannot be used for items previously denied by Congress. These acts also require the Secretary of Defense to notify Congress of transfers made under this authority. The stabilized annual fuel prices used in the services’ budget requests to Congress do not reflect the full cost of fuel because of cash movements (adjustments) and inaccurate surcharges. Therefore, the services’ budgets for fuel may be greater or less than needed and funds for other readiness needs may be adversely affected. Based on our review of Office of Management and Budget and Defense Energy Support Center methodologies, the crude and refined oil price components appeared reasonable (see app. I for details). However, in fiscal years 1993-2002, cash movements into and out of the fund (adjustments) amounting to over $4 billion, while disclosed to Congress in DOD budget documents, were used for other purposes rather than to lower or raise prices. Some of the cash was moved at the direction of Congress and some at the direction of DOD. Congress makes such decisions as part of its budget deliberations. While authorized to move funds, DOD did not provide Congress with any rationale for the movements based on the limitations in the applicable appropriations acts. Identifying the rationale for moving these funds would be helpful to DOD and congressional decisionmakers as part of the budget review process. Removing money from the fund, which could be used to reduce future fuel prices, causes future service appropriations to be higher than they otherwise would be. In addition, the estimated surcharge component of the price used in budgeting was consistently higher than actual; it did not contain all costs; and in some cases, the costs were not adequately supported. Substantial cash movements (adjustments) into and out of the fund, while disclosed to Congress in budget documents, have kept prices from reflecting the full cost of fuel and affected the development of future years’ stabilized annual fuel prices. As a result, the fuel-related portion of the services’ operation and maintenance budgets totaled about $2.5 billion too high in 5 fiscal years and about $1.5 billion too low in another. The cash taken out of the fund went for the services’ operation and maintenance and other nonfuel-related expenses. Further, Congress provided a $1.56 billion emergency supplemental appropriation in fiscal year 2000 to help offset a loss due to a worldwide increase in crude oil prices. This was necessary because DOD had established a stabilized price of $26.04 per barrel but the actual cost that year was $48.58 per barrel. This appropriation allowed DOD to avoid recovering the loss through a price increase. Figure 4 shows the various fuel-related cash movements during fiscal years 1993 through 2002. Table 1 shows the various cash movements out of the working capital fund from fiscal years 1993 through 2002. In total, about $2.5 billion of fuel-generated funds was removed from the fund. Of this amount, $0.5 billion was used to pay for specific nonfuel-related expenses such as the Counter Drug Effort. 
The remaining $2.0 billion was used to meet the services’ other operation and maintenance needs. In reviewing these cash movements, we noted that DOD had notified Congress. However, when doing so, DOD did not provide rationale for the cash movements based on the law, which stipulates that the authority for such movements may not be used, unless for higher priority items, based on unforeseen military requirements, and where the item for which the funds are requested has not been previously denied by Congress. As a good management practice, such rationale, along with other information, such as the impact on future prices, would serve to provide more visibility to cash movements. In fact, in one instance, the Senate Appropriations Committee disallowed the $125-million request created when DOD moved these funds from the Defense-wide Working Capital Fund to cover Air Force Working Capital Fund losses. The Senate Appropriations Committee Report on the Department of Defense Appropriation Bill, 2002 and Supplemental Appropriations, 2002, stated that it could not support such a cash movement because it was inconsistent with DOD’s existing policies for recovering working capital fund losses. As a result, the committee reduced the appropriation to DOD’s working capital fund by that amount. Table 2 shows the effect of these cash movements on the stabilized annual fuel price if they had been used to lower or raise future year prices. Cash removed in 5 years caused the services’ fuel budgets to be about $2.5 billion higher than necessary because the prices could have been lowered. For example, $800 million removed in fiscal year 2001 caused the stabilized price in fiscal year 2003 to be $7.27 per barrel higher than necessary. As a result, the services’ fiscal year 2003 fuel budgets were overstated by $800 million. However, in fiscal year 2000, a $1.43 billion net cash movement into the fund caused the fiscal year 2002 stabilized price to be $12.99 per barrel lower than necessary to recover the full cost. As a result, the services’ fiscal year 2002 budgets were understated by $1.43 billion. While military service comptroller officials responsible for managing fuel costs for each service stated that they were aware that DOD sets the stabilized annual fuel price that they must use in the budget process, they believed any gains in 1 year were being used to lower future fuel prices. These officials were not aware that funds generated from fuel sales in 1 year were being used to pay for nonfuel-related DOD needs. In their view, lower prices would have allowed them to use more of their operation and maintenance funds for other priorities. The estimated surcharge portion of the price supporting budget requests has not accurately accounted for fuel-related costs consistent with DOD’s Financial Management Regulation. The surcharges were consistently higher than actual but did not include all costs. Furthermore, some costs were not adequately supported. These problems were due to deficient methodologies and record-keeping. As a result the stabilized annual prices and resulting services’ budgets were inaccurate. Consistent surcharge overstatements caused the stabilized annual price of fuel to be higher than necessary and cost customers on average about $99 million annually from fiscal years 1993 through 2001. Our analysis of the surcharge costs shows that the estimated obligations exceeded actual obligations for every year from fiscal years 1993 through 2001 except for fiscal year 1999 as shown in table 3 below. 
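To make the surcharge finding concrete, the sketch below shows the kind of annual true-up at issue: when estimated surcharge obligations exceed actual obligations, the difference can flow back as a per-barrel reduction in a later year's surcharge. The dollar amounts and the simple carry-forward rule are our illustrative assumptions, not DOD's procedure.

```python
# Illustrative true-up of the fuel surcharge (all figures are hypothetical).

estimated_obligations = 600_000_000.00   # surcharge costs built into the year's stabilized price
actual_obligations    = 501_000_000.00   # surcharge costs actually obligated during the year
projected_barrels     = 110_000_000      # projected sales volume used in pricing

overcollection        = estimated_obligations - actual_obligations   # $99 million in this example
per_barrel_correction = overcollection / projected_barrels

print(f"Amount overcollected from customers:            ${overcollection:,.0f}")
print(f"Potential next-year surcharge reduction:  about ${per_barrel_correction:.2f} per barrel")
```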
We recognize that variances will occur between estimated and actual surcharge obligations. Differences, however, should be assessed annually and appropriate adjustments made to the next year’s surcharge. We found that no adjustments for these overcharges, as required by DOD’s Financial Management Regulation, were made in fiscal years 1994 through 2001. After we brought this to DOD’s attention, adjustments were made when computing the fuel price for fiscal years 2002 and 2003. The surcharges, however, did not include all required costs. Inventory losses were not included in the surcharge as required by DOD’s Financial Management Regulation. For fiscal years 1993 through 2000, these losses ranged from $12.0 million to $27.5 million a year. Adding these losses would have increased surcharges by about 9 to 23 cents per barrel. While officials stated that inventory losses were a factor in determining the number of barrels to be purchased, this practice does not comply with DOD’s regulation, which stipulates that inventory losses should be included in the surcharge. Our analysis of the estimated surcharge components disclosed that support for some costs was inadequate. We found that DLA had inadequate support for its $40-million annual headquarters overhead charge that is passed on to the Defense Energy Support Center. This amount equated to over 5 percent of the fiscal year 2002 and 7 percent of the fiscal year 2003 surcharges. While DLA has a methodology for allocating its overhead costs to the affected business activities, we could not verify/validate the portion that was assessed to the center. As a result, we could not determine whether the Defense Energy Support Center was charged the appropriate amount. This is of particular concern because in the most recent budget submission for fiscal year 2003, DLA requested a $16.9 million increase in its overhead charges to the center. The Office of the Under Secretary of Defense (Comptroller) refused to grant the increase because it did not believe the increase was merited. Furthermore, the Defense Energy Support Center could not provide support for the $342 million terminal operations component cost for fiscal years 1997 and 1998. There was also about a $2 million difference between supporting documentation and the budgeted amount for depreciation in fiscal year 2001. The Defense Energy Support Center could not support any of the component costs prior to fiscal year 1997. According to officials, this documentation was not maintained during the move to their current location. Fuel prices have not reflected full costs. Fund cash balances have been used by Congress, and to a lesser extent DOD, to meet other budget priorities. Given the volatility in crude oil prices, these cash balances are DOD’s primary means of annually dealing with drastic increases and decreases in fuel costs. Furthermore, DOD has removed cash from the fund without providing Congress with a rationale based on appropriation act language. In one recent instance, Congress reversed one of DOD’s cash movement decisions. DOD also has not calculated surcharges consistent with the governing financial management regulation. To improve the overall accuracy of DOD’s fuel pricing practices, we recommend that the Secretary of Defense direct DOD’s comptroller to: Provide a rationale to Congress, consistent with language in the applicable appropriations act, to support the movement of funds from the working capital fund and to identify the effect on future prices. 
Require DLA and the Defense Energy Support Center to develop and maintain sound methodologies that fully account for the surcharge costs consistent with DOD's Financial Management Regulation and maintain adequate records to support the basis for all surcharge costs included in the stabilized annual fuel price. DOD generally concurred with the recommendations, but provided explanatory comments on each one. With regard to our recommendation that it provide Congress the rationale for cash movements, DOD stated that information is already being provided through formal and informal means that it believes are sufficient to report why cash was moved. We recognize this may be occurring; however, we believe that to improve visibility of fund operations, it is reasonable to provide a formal record of the rationale to fully disclose and account for each cash movement. Such a formal record does not exist; therefore, we continue to believe our recommendation is appropriate. In concurring with the recommendation to maintain adequate records, DOD expressed concern about how long to retain them and proposed 5 years. We believe DOD's proposal represents a reasonable timeframe consistent with our recommendation. In the cover letter conveying its comments, DOD stated that our report overlooks the fact that while covering gains or losses to the fund by either decreasing or increasing fuel prices the next year is a basic principle, it is not often practical to rely exclusively on this principle when establishing such prices because of transfers into and out of the fund. We disagree. While our report points out that under the working capital fund concept fuel prices should cover gains and losses, it also acknowledges that there have been numerous transfers. Our point is that to ensure fund accountability when such transfers occur, DOD's fuel pricing practices should include providing Congress a full disclosure of the rationale for the transfer and its impact on the price. Otherwise, the ability of the working capital fund to effectively control and account for the costs of goods and services is compromised. DOD's comments are printed in appendix II. DOD also provided technical comments, which we have incorporated as appropriate. We performed our review in accordance with generally accepted government auditing standards. Further details on our scope and methodology can be found in appendix I. We are sending copies of this report to the Senate Committee on Governmental Affairs; the House Committee on Government Reform; the Senate and House Committees on the Budget; other interested congressional committees; the Secretary of Defense; and the Director, Defense Logistics Agency. Copies will also be made available to others upon request. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact us at (202) 512-8412. Staff acknowledgments are listed in appendix III. In assessing the accuracy of DOD's stabilized annual fuel prices for fiscal years 1993-2003, we reviewed each of the four components—crude oil cost estimates, cost to refine, adjustments, and surcharges—and identified the major offices, DOD organizations, and other components involved in pricing. For the crude oil cost estimate component, we reviewed the Office of Management and Budget's methodology for estimating crude oil prices. We discussed the Office of Management and Budget's methodology with the analyst who prepares the forecasted crude oil prices.
We also reviewed the Office of Management and Budget's use of West Texas Intermediate crude oil futures prices and the historical relationships between those prices and domestic, imported, and composite crude oil prices in making crude oil price forecasts. We concluded that this approach was reasonable. For the cost to refine component, we reviewed the Defense Energy Support Center's methodology for calculating refined costs. In assessing the Defense Energy Support Center's methodology, we relied on our previous analysis of its regression equation and a suggested change that was adopted. This same methodology was still being used as of May 2002, and we continue to consider it reasonable. For the third component of fuel pricing—adjustments—we discussed and examined Office of the Under Secretary of Defense (Comptroller) documents related to stabilized annual fuel prices and applicable Program Budget Decisions to determine what costs were included in the component. To determine criteria, we reviewed the applicable portions of DOD's Financial Management Regulation and the legislative history pertaining to the creation of revolving funds since 1949. To identify any fuel-related cash movements into or out of the working capital fund that occurred and might have affected adjustments, we interviewed various DOD officials and obtained and reviewed the applicable appropriations acts and the committee and conference reports on those acts. We analyzed the results, developed a methodology for determining the effect, and discussed our conclusions with various DOD program and budget officials. Finally, for the fourth component of fuel pricing—surcharges—we obtained, reviewed, and discussed DLA and Defense Energy Support Center methodologies and documentation used in computing the estimated and actual surcharge costs. To identify criteria for what surcharge costs should include, we obtained and reviewed DOD's Financial Management Regulation and any other policies and procedures governing or affecting fuel pricing. To determine whether the support for the surcharge costs was adequate, we requested, reviewed, and analyzed pertinent documentation and records supporting budgeted and actual obligations for each surcharge element for fiscal years 1993-2003. However, officials were unable to provide support for estimated surcharge costs for fiscal years 1993-1996 or for several actual costs for fiscal years 1993 and 1994. We met with or contacted various program and budget officials within the Office of the Secretary of Defense; Office of Management and Budget; DLA Headquarters; Defense Energy Support Center; and the various military services. We performed our work from June 2001 to April 2002 in accordance with generally accepted government auditing standards. As part of our review, we examined DOD's Financial Management Regulation to determine whether it incorporated the Statement of Federal Financial Accounting Standards (SFFAS) No. 4, "Managerial Cost Accounting Standards" (Feb. 28, 1997). We did not independently verify DOD's financial information used in this report. Prior GAO and Department of Defense Inspector General audit reports and Federal Managers' Financial Integrity Act reports have identified inadequacies in the fund's accounting and reporting. As discussed in our report on the results of our review of the fiscal year 2001 Financial Report of the U.S.
Government, DOD’s financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. In addition to those named above, Bob Coleman, Jane Hunt, Patricia Lentini, Charles Perdue, Greg Pugnetti, Chris Rice, Gina Ruidera, Malvern Saavedra, and John Van Schaik made key contributions to this report.
The Department of Defense (DOD) Defense Working Capital Fund was used to buy $70 billion in commodities in fiscal year 2001. This amount is estimated to grow to $75 billion for fiscal year 2003. The department's financial management regulation states that fund activities will operate in a business-like fashion and incorporate full costs in determining the pricing of their products. The National Defense Authorization Act for Fiscal Year 2001 requires that GAO review the working capital fund activities to identify any potential changes in current management processes or policies that would result in a more efficient and economical operation. The act also requires that GAO review the efficiency, effectiveness, and flexibility of the Defense Logistics Agency's (DLA) operational practices and identify ways to improve services. One such DLA activity, the Defense Energy Support Center, sold $4.7 billion of various petroleum-related products to the military services in fiscal year 2001. DOD's fuel prices have not reflected the full cost of fuel as envisioned in the working capital fund concept because movements of cash into and out of the fund and surcharge inaccuracies have affected the stabilized annual fuel prices. Over $4 billion was moved into and out of the working capital fund from fiscal year 1993 to 2002. These movements affected the extent to which subsequent years' prices reflected the full cost of fuel. In addition, the surcharges did not accurately account for fuel-related costs as required by DOD's Financial Management Regulation.
To examine variation in MA disenrollment by contract, we analyzed CMS enrollment and disenrollment data for 2014—the most recent year of data available at the time of our analysis—for 252 contracts. We selected these 252 contracts because they had at least 100 disenrollees in poor health, at least 100 disenrollees in better health, and scores on at least 50 percent of the individual measures in the 2016 MA Five-Star Rating System—which largely reflected performance in 2014. These contracts accounted for 80 percent of the 17.5 million beneficiaries enrolled in an MA contract that year. We calculated the disenrollment rate for each of these contracts and designated the 126 contracts above the median rate of 10.6 percent as having relatively high disenrollment rates and the 126 contracts below the median rate as having relatively low disenrollment rates. (See appendix I for more details on the selection of the contracts included in our study.) To determine the extent of health-biased disenrollment, if any, among MA contracts, we focused our analysis on the 126 MA contracts with higher disenrollment rates. We used CMS risk score data to identify beneficiaries in poor health and beneficiaries in better health. Specifically, beneficiaries whose projected spending was at least twice as much as that for the average Medicare beneficiary were characterized as being in poor health, while the remaining beneficiaries were considered to be in better health. Using this information for each of the 126 contracts, we then calculated disenrollment odds ratios to determine the likelihood that beneficiaries in poor health disenrolled from the contract compared to beneficiaries in better health. For example, an odds ratio of 1.50 signifies that those in poor health were 50 percent more likely to disenroll than those in better health. We deemed contracts with an odds ratio over 1.25—where the likelihood that poor health beneficiaries disenrolled was more than 25 percent greater than that of beneficiaries in better health—as having health-biased disenrollment. We deemed those contracts with an odds ratio of 1.25 or less as lacking health-biased disenrollment. To examine the characteristics of contracts with and without health-biased disenrollment, we focused our analysis on the 126 MA contracts with higher disenrollment rates in 2014. We analyzed CMS data for 2014 on a number of contract variables, including enrollment size and the number of years of experience in the MA program. In addition, we compared the quality ratings of contracts in each group based on data from the MA Five-Star Rating System. These data comprise an overall star rating for each MA contract as well as each contract's performance scores on up to 47 individual measures—such as controlling blood pressure—grouped within 9 domains—such as managing chronic conditions. We determined the median score for individual performance measures for all contracts in the rating system. For both the contracts with and without health-biased disenrollment, we then determined the percentage of measures within each domain that had better than median scores; we characterized contracts scoring above the median as having relatively high quality. To examine the reasons beneficiaries chose to disenroll from contracts with and without health-biased disenrollment, we analyzed CMS's Disenrollment Reasons Survey reports.
The reports are compiled for MA contracts based on surveys sent to a representative sample of disenrollees to learn why they elected to leave their plan. CMS combined survey responses into one of five composite reasons for disenrollment: problems with costs, problems with drug coverage, problems getting information on drugs, problems getting needed care, and preferred providers not in network. Across the 126 MA contracts with higher disenrollment rates, we compared the average percentage of respondents who identified each composite reason for contracts with and without health-biased disenrollment. To examine how, if at all, CMS identifies contracts with health-biased disenrollment as part of its routine oversight of MA contracts, we reviewed the agency guidance and data provided to its regional offices and interviewed CMS officials. In addition, we compared the set of contracts identified by CMS as having potential problems in 2014 with the contracts we identified as having health-biased disenrollment. We examined CMS's oversight in the context of relevant standards for internal control in the federal government. We assessed the reliability of the data from CMS that we analyzed by reviewing relevant documentation and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our reporting objectives. We conducted this performance audit from November 2015 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The MA program—also known as Medicare Part C—is the private plan alternative to the traditional Medicare program. Instead of paying providers' claims directly, CMS contracts with MAOs to assume the risk of providing health benefits to beneficiaries in exchange for fixed monthly payments. The payment amounts vary, in part, depending on the relative health status of the MAO's beneficiaries as compared to the health status of an average Medicare beneficiary. CMS paid almost $170 billion to MAOs in 2015. MAOs must provide coverage for all traditional Medicare services and include a yearly limit on out-of-pocket costs. Most plans also include prescription drug coverage offered under Medicare Part D. Plans may offer more generous benefits, such as lower cost sharing and additional covered services like vision or dental care. To control utilization, plans may impose referral requirements and implement care coordination programs. MA beneficiaries' access to providers is generally limited to a network of physicians, hospitals, and others that contract with their MAO. If a given physician or hospital is not in the MA plan's network, the beneficiary's out-of-pocket cost to use that physician or hospital may be considerably higher than the cost associated with using providers in the plan's network. Beneficiaries may choose from the plans available in their county, which may be provided by multiple contracts. The contracts represent various types of plans, including HMOs, PPOs, and private fee-for-service (PFFS) plans. HMOs, which, according to CMS, accounted for nearly three-fourths of MA enrollment in 2016, generally restrict beneficiary access to providers in their network.
PPOs, which, according to CMS, accounted for nearly one-fourth of 2016 MA enrollment, also have networks, but allow beneficiaries access to non-network providers by paying higher cost sharing amounts. In contrast, PFFS plans, which accounted for 1 percent of MA enrollment in 2016, generally offer a wider choice of providers. A subset of HMOs and PPOs are special needs plans (SNP), which provide care for beneficiaries in one of three classes of special needs. Medicare beneficiaries can enroll in an SNP if they are dually eligible for Medicare and Medicaid, require an institutional level of care, or have a severe or chronic condition. While beneficiaries are generally locked into their MA plan for a year (January through December), they may voluntarily leave their plan at certain times in the year or if they meet certain criteria. During the annual open enrollment period, from October 15 to December 7, MA beneficiaries may change their MA plan selection or join traditional Medicare. This is followed by the MA disenrollment period, from January 1 to February 14, when MA beneficiaries may join traditional Medicare and are allowed to select a drug plan to go with their new coverage. In addition, CMS has special enrollment periods where MA beneficiaries may change their enrollment under certain circumstances. For example, Medicare beneficiaries can switch to a contract with an overall Five-Star rating of 5 from December 8 to the following November 30. In addition, dual-eligible beneficiaries may change their MA enrollment on the first day of any month. In 2014, disenrollment rates varied widely among the 252 contracts in our analysis, ranging from 1 to 39 percent. (See fig. 1.) The 126 contracts with high disenrollment rates—those with rates above the median rate of 10.6 percent—accounted for 38 percent of the MA population in our study. Moreover, these contracts accounted for over two-thirds of total disenrollment in our population. Nineteen percent of this group of contracts—24 contracts—had disenrollment rates of 20 percent or greater. In contrast, contracts with relatively low disenrollment rates accounted for 62 percent of the MA population in our study. Nearly half of these contracts had disenrollment rates at or below 5 percent. Among the 126 contracts with higher disenrollment rates, we found that 35 contracts had health-biased disenrollment—meaning that beneficiaries in poor health were substantially more likely to leave their contracts than those in better health. These contracts accounted for 15 percent of beneficiaries in higher disenrollment contracts, or approximately 810,000 beneficiaries. For these 35 contracts, on average, beneficiaries in poor health were 47 percent more likely to disenroll relative to beneficiaries in better health. For individual contracts, this percentage ranged from 27 to 126 percent. Among the remaining 91 contracts with higher disenrollment rates, we did not find evidence of health-biased disenrollment—meaning that in these contracts, beneficiaries in poor health had, on average, odds of disenrollment similar to beneficiaries in better health. Specifically, beneficiaries in poor health and beneficiaries in better health each had about a 1 in 5 chance of disenrolling from their contract. In total, the 91 contracts accounted for 4.5 million beneficiaries, or 85 percent of the enrollment in the MA contracts with higher rates of disenrollment in our review.
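A minimal sketch of the odds-ratio screen described above, using hypothetical beneficiary counts; it illustrates the method only and does not reflect CMS data or the actual analysis code.

```python
def disenrollment_odds_ratio(poor_left: int, poor_stayed: int,
                             better_left: int, better_stayed: int) -> float:
    """Odds of disenrollment for beneficiaries in poor health divided by the
    odds of disenrollment for beneficiaries in better health."""
    return (poor_left / poor_stayed) / (better_left / better_stayed)


# Hypothetical contract: 2,000 poor-health beneficiaries (400 disenrolled) and
# 10,000 better-health beneficiaries (1,300 disenrolled).
ratio = disenrollment_odds_ratio(400, 1_600, 1_300, 8_700)
print(f"odds ratio = {ratio:.2f}")  # ~1.67, above the 1.25 threshold, so this
                                    # contract would be flagged as having
                                    # health-biased disenrollment
```

Under this screen, a contract with a ratio of 1.25 or less would not be flagged, even if its overall disenrollment rate were high.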
We found several notable differences when comparing the characteristics of the 35 contracts with health-biased disenrollment with the 91 contracts without health-biased disenrollment. Specifically, the 35 health-biased disenrollment contracts were more likely to have the following: Lower enrollment. Sixty-nine percent of health-biased disenrollment contracts had fewer than 15,000 enrollees. This percentage was lower for contracts without health-biased disenrollment—25 percent. In addition, a smaller percentage of health-biased disenrollment contracts—11 percent—had enrollment that exceeded 50,000 beneficiaries. In contrast, 29 percent of contracts without health-biased disenrollment had enrollments that large. Higher proportion of HMOs. Ninety-one percent of health-biased disenrollment contracts were HMOs—which feature closed provider networks—while only 9 percent were PPOs. In contrast, for contracts without health-biased disenrollment, 70 percent were HMOs and 25 percent were PPOs. Larger share of SNP enrollees. Contracts with health-biased disenrollment tended to have a higher proportion of beneficiaries in SNPs—which provide targeted care for special needs individuals, such as those with chronic conditions. Thirty-seven percent of these contracts had a majority of their beneficiaries in SNPs, compared with 21 percent of the contracts without health-biased disenrollment. Less time in MA program. The contracts with health-biased disenrollment had fewer years in the MA program than the contracts without health-biased disenrollment—an average of 8 years compared with 12 years. (See table 1 for a comparison of the characteristics of contracts with and without health-biased disenrollment.) Our analysis of data from CMS's MA Five-Star Rating System showed that the contracts with health-biased disenrollment generally had lower overall quality ratings than contracts without health-biased disenrollment. (See fig. 2.) Among the 126 contracts with higher disenrollment rates, nearly two-thirds of the health-biased contracts had three or fewer stars compared to about one-fourth of the contracts without health-biased disenrollment. Furthermore, only 11 percent of contracts with health-biased disenrollment had four or more stars compared to 32 percent of the contracts without health-biased disenrollment. In addition, the contracts with health-biased disenrollment scored lower than the contracts without health-biased disenrollment across each of the nine performance domains in the MA Five-Star Rating System. We found that a smaller share of the 35 contracts with health-biased disenrollment had better than median quality scores when compared to the 91 contracts without health-biased disenrollment. For example, only 36 percent of the contracts with health-biased disenrollment had better than median scores on managing chronic (long-term) conditions, which include measures on blood pressure and diabetes care. In contrast, 52 percent of the contracts without health-biased disenrollment performed above the median on this performance domain. (See table 2.) Our review of CMS's Disenrollment Reasons Survey reports showed that beneficiaries who disenrolled from the 35 contracts with health-biased disenrollment tended to report that they did so for reasons related to provider coverage. In contrast, beneficiaries who disenrolled from the 91 contracts without health-biased disenrollment tended to report that they left their contracts for reasons related to the cost of care.
(See fig. 3.) Specifically, we found the following: Beneficiaries who left the 35 contracts with health-biased disenrollment commonly reported disenrolling because their preferred doctor or hospital was not covered by their MA contract. This reason was cited by 41 percent of surveyed disenrollees, on average, across the contracts with health-biased disenrollment. In contrast, the same reason was cited by 25 percent of surveyed disenrollees, on average, across the contracts without health-biased disenrollment. Beneficiaries in contracts with health-biased disenrollment were also more likely to report problems obtaining needed care, problems obtaining information on drugs, and problems with drug coverage. For example, on average, 27 percent of surveyed disenrollees from contracts with health-biased disenrollment reported difficulty getting needed care. In contrast, 16 percent of surveyed disenrollees from contracts without health-biased disenrollment cited this reason. Beneficiaries who left the 91 contracts without health-biased disenrollment were more likely to report financial reasons for disenrolling. On average, 28 percent of disenrollees from these contracts identified problems with costs, compared with 18 percent of disenrollees from contracts with health-biased disenrollment. For example, when asked whether the presence of another plan that costs less was a reason for disenrolling, 45 percent of disenrollees from the contracts without health-biased disenrollment cited this reason, compared with 27 percent of beneficiaries who left contracts with health-biased disenrollment. CMS does not identify patterns of disenrollment by beneficiary health status in its routine oversight of MA contracts. Account managers in the 10 regional offices are the CMS officials responsible for overseeing these contracts. To do so, CMS officials told us they follow a standard performance monitoring protocol designed to determine whether the contracts adhere to all program requirements and whether they need additional scrutiny. As part of their review, the account managers examine a variety of contract performance data, including MA Five-Star ratings, beneficiary complaint rates, and data on significant changes in drug coverage. Overall contract disenrollment rates are included in the MA Five-Star Rating System provided to account managers to identify contracts that may need closer scrutiny. However, CMS officials told us these rates do not include information on beneficiary health status. In addition, CMS's account managers do not use the information CMS collects in the Disenrollment Reasons Survey in their oversight of MA contracts. The survey asks beneficiaries about the reasons they have disenrolled from their MA plan, and CMS officials told us that CMS develops the survey reports and distributes them to MAOs annually to help facilitate quality improvement efforts. The survey results are also made available to the public on CMS's Medicare Plan Finder website so that beneficiaries considering enrollment in an MA plan can learn why beneficiaries have chosen to leave a particular plan. Given the data account managers use in their oversight of MA contracts, CMS is unlikely to consistently identify contracts with health-biased disenrollment as needing extra scrutiny. As part of its ongoing analysis of contract performance data, CMS identified 63 contracts as potentially requiring additional scrutiny in 2014. However, this list included only 9 of the 35 contracts we identified as having health-biased disenrollment.
CMS classified 2 of the 9 contracts as potentially requiring what the agency describes as “intensive monitoring,” which may include dedicated monthly meetings between the account managers and MAO representatives to discuss problem areas. CMS identified the other 7 contracts as requiring some additional monitoring, which may include at least one meeting between the account manager and MAO representatives. CMS has available data that its account managers could use to monitor contract disenrollment rates by beneficiary health status. Disenrollment rates are one of the measures used in the MA Five-Star Rating System; we used CMS’s beneficiary risk scores, which are based on demographic and diagnosis information, to identify beneficiaries in poor and better health; and the Disenrollment Reasons Survey provides information on why beneficiaries disenroll from their plans. As we have shown, contracts with health-biased disenrollment had lower quality scores, and beneficiaries who disenrolled from these contracts more commonly cited problems with coverage of preferred doctors and hospitals as well as problems getting access to care as leading reasons for disenrolling. As a result, the survey data could be used in conjunction with the other available data to reveal unique information about contract performance that other data do not show. By not analyzing disenrollment rates for signs of potential health-biased disenrollment, CMS account managers may fail to identify problems in MA contract performance. This poses a risk to beneficiaries, given that MA contracts are prohibited from limiting or conditioning their coverage or provision of benefits based on health status and must ensure adequate access to covered services for all beneficiaries. CMS’s oversight is also inconsistent with federal internal control standards, which call for agencies to identify, analyze, and respond to risks. CMS is responsible for ensuring that all MA contracts offer care that meets applicable standards, regardless of beneficiary health status. However, as part of its routine oversight, CMS does not examine disenrollment rates by health status. Our analysis identified 35 contracts in 2014 where MA beneficiaries in poor health were more likely to disenroll than those in better health. These contracts with health-biased disenrollment had quality scores that were consistently and substantially below the scores of contracts without health-biased disenrollment. In addition, survey data indicate that beneficiaries who left these contracts reported problems with coverage of preferred doctors and hospitals as well as problems getting access to care as leading reasons they chose to leave their contracts. This type of information on disenrollment and beneficiary health status is available to CMS; however, by not leveraging it as part of its routine oversight of MA contracts, CMS is missing an opportunity to better target its oversight activities toward MA contracts that may not be adequately meeting the health care needs of all beneficiaries, particularly those in poor health. To strengthen CMS’s oversight of MA contracts, the Administrator of CMS should review data on disenrollment by health status and the reasons beneficiaries disenroll as part of the agency’s routine monitoring efforts. We provided a draft of this report to HHS for comment. In its written comments, which are reprinted in appendix II, HHS concurred with our recommendation. 
HHS noted that it currently uses disenrollment data in its review of MA plan quality and performance and will continue to consider ways of incorporating disenrollment data in its oversight. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Using data from the Centers for Medicare & Medicaid Services (CMS), we examined enrollment and disenrollment in 2014—the most recent year of data available at the time of our analysis. Of the 732 Medicare Advantage (MA) contracts in 2014, we excluded 480 contracts, with 3.6 million beneficiaries, from our analysis. These were contracts with fewer than 100 beneficiaries in poor health who disenrolled in 2014, contracts with fewer than 100 beneficiaries in better health who disenrolled in 2014, or contracts for which CMS reported fewer than 50 percent of the individual measures in the 2016 MA Five-Star Rating System—which largely reflected performance in 2014. These contracts were excluded because they did not have a sufficient number of Five-Star measures or had too few disenrollees in poor or better health. The remaining 252 contracts accounted for 79.7 percent of the 17.5 million beneficiaries enrolled in an MA contract in 2014. We then ranked the 252 contracts in terms of their total disenrollment rate, identifying the 126 contracts with disenrollment rates higher than the median of 10.6 percent as having relatively high disenrollment. We focused our analysis on these 126 contracts because contracts with below-median disenrollment may not warrant the same level of oversight scrutiny as those with higher rates. (See fig. 4.) In addition to the contact named above, GAO staff who made key contributions to this report include Rosamond Katz (Assistant Director), Richard Lipinski (Assistant Director), Will Crafton (Analyst-in-Charge), and Betsy Conklin. Also contributing were Krister Friday and George Bogart.
In 2016, over 30 percent of Medicare beneficiaries were enrolled in the MA program. Each year beneficiaries have an opportunity to join or leave their MA plan. GAO was asked to review MA disenrollment by health status and CMS oversight. This report examines, among other issues, (1) the extent of any health-biased disenrollment, (2) beneficiaries' reasons for leaving contracts with and without health-biased disenrollment, and (3) how, if at all, CMS identifies contracts with health-biased disenrollment as part of its routine oversight. GAO analyzed 2014 disenrollment rates for the 252 MA contracts that had a sufficient number of disenrollees and met other criteria. For the 126 contracts with disenrollment rates above the median rate, GAO used beneficiaries' projected health care costs to identify those in poor health and better health. GAO examined data from CMS's Disenrollment Reasons Survey to learn why beneficiaries reported leaving the 126 contracts with relatively high disenrollment rates. GAO also interviewed CMS officials and compared their oversight to federal standards for internal control. Under the Medicare Advantage (MA) program, the Centers for Medicare & Medicaid Services (CMS) contracts with private entities to offer coverage for Medicare beneficiaries. GAO examined 126 contracts with higher disenrollment rates—above the median rate of 10.6 percent in 2014—and found 35 contracts with health-biased disenrollment. In these contracts, beneficiaries in poor health were substantially more likely (on average, 47 percent more likely) to disenroll relative to beneficiaries in better health. Such disparities in contract disenrollment by health status may indicate that the needs of beneficiaries, particularly those in poor health, are not being adequately met. GAO found that beneficiaries who left the 35 contracts with health-biased disenrollment tended to report leaving for reasons related to preferred providers and access to care. In contrast, beneficiaries who left the 91 contracts without health-biased disenrollment tended to report that they left their contracts for reasons related to the cost of care. CMS does not use available data to examine disenrollment by health status as part of its ongoing oversight; thus, CMS may fail to identify problems in MA contract performance, which poses a risk because contracts are prohibited from limiting coverage based on health status. CMS's oversight is inconsistent with federal internal control standards. To strengthen its oversight of MA contracts, CMS should examine data on disenrollment by health status and the reasons beneficiaries disenroll. HHS concurred with GAO's recommendation.
Despite many successes in the exploration of space, such as landing the Pathfinder and Exploration Rovers on Mars, NASA has had difficulty bringing a number of projects to completion, including several efforts to build a second generation reusable human spaceflight vehicle to replace the space shuttle. NASA has attempted several costly endeavors, such as the National Aero-Space Plane, the X-33 and X-34, and the Space Launch Initiative. While these endeavors have helped to advance scientific and technical knowledge, none has achieved the objective of fielding a new reusable space vehicle. We estimate that these unsuccessful development efforts have cost approximately $4.8 billion since the 1980s. The high cost of these unsuccessful efforts and the potential costs of implementing the Vision make it important that NASA achieve success in its new exploration program, beginning with the CEV project. Our past work has shown that developing a sound business case, based on matching requirements to available and reasonably expected resources before committing to a new product development effort, reduces risk and increases the likelihood of success. High levels of knowledge should be demonstrated before managers make significant program commitments, specifically: (1) At program start, the customer's needs should match the developer's available resources in terms of availability of mature technologies, time, human capital, and funding; (2) Midway through development, the product's design should be stable and demonstrate that it is capable of meeting performance requirements; (3) By the time of the production decision, the product must be shown to be producible within cost, schedule, and quality targets, and have demonstrated its reliability. Our work has shown that programs that have not attained the level of knowledge needed to support a sound business case have been plagued by cost overruns, schedule delays, decreased capability, and overall poor performance. With regard to NASA, we have reported that in some cases the agency's failure to define requirements adequately and develop realistic cost estimates—two key elements of a business case—resulted in projects costing more, taking longer, and achieving less than originally planned. Although NASA is continuing to refine its exploration architecture cost estimates, the agency cannot at this time provide a firm estimate of what it will take to implement the architecture. The absence of firm cost estimates is mainly because the program is in the early stages of its life cycle. NASA conducted a cost risk analysis of its preliminary estimates through fiscal year 2011. On the basis of this analysis and through the addition of programmatic reserves (20 percent on all development costs and 10 percent on all production costs), NASA is 65 percent confident that the actual cost of the program will meet or be less than its estimate of $31.2 billion through fiscal year 2011. For cost estimates beyond 2011, when most of the cost risk for implementing the architecture will be realized, NASA has not assigned a confidence level. Since NASA released its preliminary estimates, the agency has continued to make architecture changes and refine its estimates in an effort to establish a program that will be sustainable within projected resources.
While changes to the program are appropriate at this stage when concepts are still being developed, they leave the agency in the position of being unable to firmly identify program requirements and needed resources. NASA plans to commit to a firm cost estimate for the Constellation program at the preliminary design review in 2008, when requirements, design, and schedule will all be baselined. This is the point at which we advocate that program commitments be made, on the basis of the knowledge secured. NASA will be challenged to implement the ESAS-recommended architecture within its projected budget, particularly in the longer term. As we reported in July 2006, there are years when NASA has projected insufficient funding to implement the architecture, with some yearly shortfalls exceeding $1 billion, while in other years the funding available exceeds needed resources. Under its approach, NASA plans to use almost $1 billion in appropriated funds from fiscal years 2006 and 2007 to address the short-term funding shortfalls. NASA, using a "go as you can afford to pay" approach, maintains that in the short term the architecture could be implemented within the projected available budgets through fiscal year 2011 when funding is considered cumulatively. However, despite initial surpluses, the program's long-term sustainability is questionable given its funding outlook. NASA's preliminary projections show multibillion-dollar shortfalls for its Exploration Systems Mission Directorate in all fiscal years from 2014 to 2020, with an overall deficit through 2025 in excess of $18 billion. According to NASA officials, for the program to be sustainable over the long run, the agency will have to keep it compelling for both Congress and potential international partners in terms of the activities that will be conducted as part of the lunar program. NASA is attempting to address funding shortfalls within the Constellation program by redirecting funds to that program from other Exploration Systems Mission Directorate activities to provide a significant surplus in fiscal years 2006 and 2007 to cover projected shortfalls beginning in fiscal year 2009. Several Research and Technology programs and missions were discontinued, descoped, or deferred, and the associated funding was shifted to the Constellation program to accelerate development of the CEV and CLV. In addition, the Constellation program has requested more funds than required for its projects in several early years to cover shortfalls in later years. NASA officials stated that the identified budget phasing problem could worsen given the changes that were made to the exploration architecture following issuance of the study. For example, while life cycle costs may be lower in the long run, acceleration of development for the five-segment Reusable Solid Rocket Booster and the J-2X engine will likely add to near-term development costs, where funding is already constrained. NASA has yet to provide cost estimates associated with these program changes. NASA must also contend with competing budgetary demands within the agency as implementation of the exploration program continues. NASA's estimates beyond 2010 are based upon a surplus of well over $1 billion in fiscal year 2011 due to the retirement of the space shuttle fleet in 2010. However, NASA officials said the costs for retiring the space shuttle and transitioning to the new program are not fully understood; thus, the expected surplus could be less than anticipated.
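The affordability discussion above turns on comparing projected budgets with needed resources both year by year and cumulatively. The sketch below illustrates that comparison with hypothetical figures in billions of dollars; the numbers are illustrative assumptions, not NASA's actual projections.

```python
# Hypothetical yearly figures in billions of dollars; illustrative only.
budget =      {2007: 3.9, 2008: 3.6, 2009: 3.0, 2010: 3.3, 2011: 4.4}
requirement = {2007: 3.0, 2008: 3.2, 2009: 4.1, 2010: 3.4, 2011: 4.0}

cumulative = 0.0
for year in sorted(budget):
    yearly = budget[year] - requirement[year]  # surplus (+) or shortfall (-)
    cumulative += yearly
    print(f"{year}: yearly {yearly:+.1f}B, cumulative {cumulative:+.1f}B")

# A year-by-year view can show a shortfall exceeding $1 billion (2009 here)
# even though the cumulative balance through 2011 stays positive, which is
# the essence of the "go as you can afford to pay" argument.
```

Whether such an approach is sustainable depends on the out-years, where the projections cited above show multibillion-dollar shortfalls.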
This year, NASA plans to spend over 39 percent of its annual budget on space shuttle and International Space Station (ISS) operations—dollars that will continue to be obligated each year as NASA completes construction of the ISS by the end of fiscal year 2010. This does not include the resources necessary to develop crew rotation or logistics servicing support capabilities for the ISS during the period between the space shuttle's retirement and the CEV's first mission to the station. While the space shuttle budget is generally scheduled to decrease as the program moves closer to retirement, questions remain about the dollars required to retire the space shuttle fleet as well as to transition portions of the infrastructure and workforce to support implementation of the exploration architecture. In addition, there is support within Congress and the scientific community to restore money to the Science Mission Directorate that was transferred to the space shuttle program to ensure its viability through its planned retirement in 2010. Such a change could have an impact on future exploration funding. In July 2006, we reported that NASA's acquisition strategy for the CEV placed the project at risk of significant cost overruns, schedule delays, and performance shortfalls because it committed the government to a long-term contract before establishing a sound business case. We found that the CEV contract, as structured, committed the government to pay for design, development, production, and sustainment upon contract award—with a period of performance through at least 2014 and the possibility of extending through 2019. Our report highlighted that NASA had yet to develop key elements of a sound business case, including well-defined requirements, mature technology, a preliminary design, and firm cost estimates that would support such a long-term commitment. Without such knowledge, NASA cannot predict with any confidence how much the program will cost, what technologies will or will not be available to meet performance expectations, and when the vehicle will be ready for use. NASA has acknowledged that it will not have these elements in place until the project's preliminary design review scheduled for fiscal year 2008. As a result, we recommended that the NASA Administrator modify the current CEV acquisition strategy to ensure that the agency does not commit itself, and in turn the federal government, to a long-term contractual obligation prior to establishing a sound business case at the project's preliminary design review. In response to our recommendation, NASA disagreed and stated that it had the appropriate level of knowledge to proceed with its current acquisition strategy. NASA also indicated that knowledge from the contractor is required in order to develop a validated set of requirements and, therefore, it was important to get the contractor onto the project as soon as possible. In addition, according to NASA officials, selection of a contractor for the CEV would enable the agency to work with the contractor to attain knowledge about the project's required resources and, therefore, be better able to produce firm estimates of project cost. In our report, we highlighted that this is the type of information that should be obtained prior to committing to a long-term contract.
To our knowledge, NASA did not explore the possibility of utilizing the contractor, through a shorter-term contract, to conduct work needed to develop valid requirements and establish higher-fidelity cost estimates—a far less risky and less costly strategy. Subsequent to our report, NASA did, however, take steps to address some of the concerns we raised. Specifically, NASA modified its acquisition strategy for the CEV and made the production and sustainment schedules of the contract—known as Schedules B and C—contract options that the agency will decide whether to exercise after the project's critical design review in 2009. Therefore, NASA will be liable for the minimum quantities under Schedules B and C only when and if it chooses to exercise those options. These changes to the acquisition strategy lessen the government's financial obligation at this early stage. Table 1 outlines the information related to the CEV acquisition strategy found in the request for proposal and changes that were made to that strategy prior to contract award. While we view these changes as in line with our recommendation and as a positive step to address some of the risks we raised in our report, NASA still has no assurance that the project will have the elements of a sound business case in place at the preliminary design review. Therefore, NASA's commitment to efforts beyond the project's preliminary design review—even when this commitment is limited to design, development, test and evaluation activities (DDT&E)—is a risky approach. It is at this point that NASA should (a) have the increased knowledge necessary to develop a sound business case that includes high-fidelity, engineering-based estimates of life cycle cost for the CEV project, (b) be in a better position to commit the government to a long-term effort, and (c) have more certainty in advising Congress on required resources. Sound project management and oversight will be key to addressing risks that remain for the CEV project as it proceeds with its acquisition approach. To help mitigate these risks, NASA should have in place the markers necessary to help decision makers monitor the CEV project and ensure that it is following a knowledge-based approach to its development. However, in our 2005 report that assessed NASA's acquisition policies, we found that NASA's policies lacked major decision reviews beyond the initial project approval gate and a standard set of criteria with which to measure projects at crucial phases in the development life cycle—key markers for monitoring such progress. In our review of the individual center policies, we found that the Johnson Space Center project management policy, which is the policy that the CEV project will be required to follow, also lacked such key criteria. We concluded that without such requirements in place, decision makers have little knowledge about the progress of the agency's projects and, therefore, cannot be assured that they are making informed decisions about whether continued investment in a program or project is warranted. We recommended that NASA incorporate requirements in its new systems engineering policy to capture specific product knowledge at key junctures in project development. The demonstration of such knowledge could then be used as exit criteria for decision making at the following key junctures: Before projects are approved to transition into implementation, we suggested that projects be required to demonstrate that key technologies have reached a high maturity level.
Before projects are approved to transition from final design to fabrication, assembly, and test, we suggested that projects be required to demonstrate that the design is stable. Before projects are approved to transition to production, we suggested that projects be required to demonstrate that the design can be manufactured within cost, schedule, and quality targets. In addition, we recommended that NASA institute additional major decision reviews that are tied to these key junctures to allow decision makers to reassess the project based upon demonstrated knowledge. While NASA concurred with our recommendations, the agency has yet to take significant actions to implement them. With regard to our first recommendation, NASA stated that the agency would establish requirements for success at the key junctures mentioned above. NASA planned to include these requirements in the systems engineering policy it issued in March 2006. Unfortunately, NASA did not include these criteria as requirements in the new policy, but included them in an appendix to the policy as recommended best practices criteria. In response to our second recommendation, NASA stated it would revise its program and project management policy for flight systems and ground support projects, due to be completed in fall 2006. In the revised policy, NASA indicated that it would require the results of the critical design review and, for projects that enter a large-scale production phase, the results of the production readiness review to be reported to the appropriate decision authority in a timely manner so that a decision about whether to proceed with the project can be made. NASA has yet to issue its revised policy; therefore, it remains to be seen whether the CEV project decision authorities will have the opportunity to reassess and make decisions about the project using the markers recommended above after the project has initially been approved. Briefings that we have recently received indicate that NASA plans to implement our recommendation in the revised policy. The risks that NASA has accepted by moving ahead with awarding the contract for DDT&E for the CEV could be mitigated by implementing our recommendations as it earlier agreed to do. Doing so would provide both NASA and Congress with markers of the project's progress at key points. For example, at the preliminary design review, decision makers would be able to assess the status of the project by using the marker of technology maturity. In addition, at the critical design review, the agency could assess the status of the project using design stability (i.e., a high percentage of engineering drawings completed). If NASA does not demonstrate technology maturity at the preliminary design review or design stability at the critical design review, decision makers would have an indication that the project is likely headed for trouble. Without such markers, NASA cannot be confident that its decisions about continued investments in projects are based upon the appropriate knowledge. Furthermore, NASA's oversight committees could also use the information when debating the agency's yearly budget, authorizing funds for the CEV project, and making choices among NASA's many competing programs. With this type of information from NASA about its key projects, Congress will be in a better position to make informed decisions about how to invest the nation's limited discretionary funds.
NASA’s ability to address a number of long-standing financial management challenges could also impact management of NASA’s key projects. The lack of reliable, day-to-day information continues to threaten NASA’s ability to manage its programs, oversee its contractors, and effectively allocate its budget across numerous projects and programs. To its credit, NASA has recognized the need to enhance the capabilities and improve the functioning of its core financial management system, however, progress has been slow. NASA contract management has been on GAO’s high-risk list since 1990 because of such concerns. In conclusion, implementing the Vision over the coming decades will require hundreds of billions of dollars and a sustained commitment from multiple administrations and Congresses. The realistic identification of the resources needed to achieve the agency’s short-term goals would provide support for such a sustained commitment over the long term. With a range of federal commitments binding the fiscal future of the United States, competition for resources within the federal government will only increase over the next several decades. Consequently, it is incumbent upon NASA to ensure that it is wisely investing its existing resources. As NASA proceeds with its acquisition strategy for the CEV project and other key projects, it will be essential that the agency ensure that the investment decisions it is making are sound and based upon high levels of knowledge. NASA should require that the progress of its projects are evaluated and reevaluated using knowledge based criteria, thereby improving the quality of decisions that will be made about which program warrant further investment. Furthermore, it will be critical that NASA’s financial management organization delivers the kind of analysis and forward- looking information needed to effectively manage its programs and projects. Clear, strong executive leadership will be needed to ensure that these actions are carried out. Given the nation’s fiscal challenges and those that exist within NASA, the availability of significant additional resources is unlikely. NASA has the opportunity to establish a firm foundation for its entire exploration program by ensuring that the level of knowledge necessary to allow decision makers to make informed decisions about where continued investment is justified. Doing so will enhance confidence in the agency’s ability to finally deliver a replacement vehicle for future human space flight. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. For further information regarding this testimony, please contact Allen Li at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony include Greg Campbell, Richard Cederholm, Hillary Loeffler, James L. Morrison, Jeffrey M. Niblack, and Shelby S. Oakley. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Aeronautics and Space Administration (NASA) plans to spend nearly $230 billion over the next two decades implementing the President's Vision for Space Exploration (Vision). In July 2006, GAO issued a report that questioned the program's affordability and, particularly, NASA's acquisition approach for one of the program's major projects--the Crew Exploration Vehicle (CEV). This testimony, which is based upon that report and another recent GAO report evaluating NASA's acquisition policies, highlights GAO's continuing concerns with (1) the affordability of the exploration program; (2) the acquisition approach for the CEV; and (3) NASA's acquisition policies, which lack requirements for projects to proceed with adequate knowledge. NASA's proposals for implementing the space exploration Vision raise a number of concerns. NASA cannot develop a firm cost estimate for the exploration program at this time because the program is in its early stages. The changes that have occurred to the program over the past year and the resulting refinement of its cost estimates are indicative of the evolving nature of the program. While changes are appropriate at this stage of the program, they leave the agency unable to firmly identify program requirements and needed resources and, therefore, not in the position to make a long-term commitment to the program. NASA will likely be challenged to implement the program, as laid out in its Exploration Systems Architecture Study (ESAS), due to the high costs associated with the program in some years and its long-term sustainability relative to anticipated funding. As GAO reported in July 2006, there are years when NASA does not have sufficient funding to implement the architecture, with some yearly shortfalls exceeding $1 billion, while in other years the funding available exceeds needed resources. Despite initial surpluses, the long-term sustainability of the program is questionable, given its long-term funding outlook. NASA's preliminary projections show multibillion-dollar shortfalls for its exploration directorate in all fiscal years from 2014 to 2020, with an overall deficit through 2025 in excess of $18 billion. NASA's acquisition strategy for the CEV was not based upon obtaining an adequate level of knowledge when making key resource decisions, placing the program at risk for cost overruns, schedule delays, and performance shortfalls. These risks were evident in NASA's plan to commit to a long-term product development effort before establishing a sound business case for the project that includes well-defined requirements, mature technology, a preliminary design, and firm cost estimates. NASA adjusted its acquisition approach and included the production and sustainment portions of the contract as options--a move that is consistent with the recommendation in GAO's report because it lessens the government's financial obligation at this early stage. However, risks persist with NASA's approach. As GAO reported in 2005, NASA's acquisition policies lacked major decision reviews beyond the initial project approval gate and lacked a standard set of criteria with which to measure projects at crucial phases in the development life cycle. These decision reviews and development measures are key markers needed to ensure that projects proceed with, and decisions are based upon, the appropriate level of knowledge, and they can help lessen identified project risks. The CEV project would benefit from the application of such markers.
As we have reported, the use of information technology (IT) to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care and is critical to improving the performance of the U.S. health care system. Critical health information for a patient seeking treatment (such as allergies, current treatments or medications, and prior diagnoses) has historically been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data makes it challenging for a clinician to make the most informed decisions on treatment options, potentially putting the patient’s health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many DOD and VA patients tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. Key to making health care information electronically available is the ability to share that data among health care providers—that is, interoperability. Interoperability is the ability of different information systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients’ electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the information required for optimal care. (Paper-based health records—if available—also provide necessary information, but unlike electronic health records, paper records do not provide decision support capabilities, such as automatic alerts about a particular patient’s health, or other advantages of automation.) Interoperability can be achieved at different levels. At the highest level, data are in a format that a computer can understand and operate on, whereas at the lowest level, the data are simply in a viewable format, so that the information is available for a human being to read and interpret. Figure 1 shows various levels of interoperability and examples of the types of data that can be shared at each level. As the figure shows, paper records can be considered interoperable in that they allow data to be read and interpreted by a human being. In the remainder of this report, however, we do not discuss interoperability in this sense; instead, we focus on electronic interoperability, for which the first level of interoperability is unstructured, viewable electronic data. With unstructured data, a clinician would have to find needed or relevant information by scanning uncategorized information.
The value of viewable data is increased if the data are structured so that information is categorized and easier to find. At the highest level, as shown, the computer can interpret and act on the data. Not all data require the same level of interoperability. For example, in their initial efforts to implement computable data, VA and DOD focused on outpatient pharmacy and drug allergy data because clinicians gave priority to the need for automated alerts to help medical personnel avoid administering inappropriate drugs to patients. On the other hand, for such narrative data as clinical notes, viewability may be sufficient. Achieving even a minimal level of interoperability is valuable for potentially making all relevant information available to clinicians. Any type of interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used. In health IT, standards govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. Developing, coordinating, and agreeing on standards are only part of the processes involved in achieving interoperability for electronic health records systems or capabilities. In addition, specifications are needed for implementing the standards, as well as criteria and a process for verifying compliance with the standards. In December 2001, an effort to establish federal health information standards was initiated as an Office of Management and Budget (OMB) e-government project to enable federal agencies to build interoperable health data systems. This project, the Consolidated Health Informatics initiative, was a collaborative agreement among federal agencies, including DOD and VA, to adopt a common set of health information standards for the electronic exchange of clinical health information. Under the Consolidated Health Informatics initiative, DOD, VA, and other participating agencies agreed to endorse 20 sets of standards to make it easier for information to be shared across agencies and to serve as a model for the private sector. For example, standard medication terminologies were agreed upon, which DOD and VA then began to adopt in developing their data repositories. Recognizing the need for public and private sector collaboration to achieve a national interoperable health IT infrastructure, the President issued an executive order in April 2004 that called for widespread adoption of interoperable electronic health records by 2014. This order established the Office of the National Coordinator for Health Information Technology within the Department of Health and Human Services (HHS) with responsibility, among other things, for developing, maintaining, and directing the implementation of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private health care sectors. Among its responsibilities as the chief advisor to the Secretary of HHS in this area, the Office of the National Coordinator is to report progress on the implementation of this strategic plan. Under the direction of HHS (through the Office of the National Coordinator), three primary organizations were designated to play major roles in expanding the implementation of health IT: the American Health Information Community, the Healthcare Information Technology Standards Panel, and the Certification Commission for Healthcare Information Technology.
All three are involved in various processes related to electronic health records interoperability standards. The functions of these organizations are described below. The community is a federal advisory body created by the Secretary of HHS to make recommendations on how to accelerate the development and adoption of health IT, including advancing interoperability, identifying health IT standards, advancing nationwide health information exchange, and protecting personal health information. Formed in September 2005, the community is made up of representatives from both the public and private sectors. The American Health Information Community determines specific health care areas of high priority and develops “use cases” for these areas, which provide the context in which standards would be applicable. For example, the community has developed a use case regarding the creation of standardized, secure records of past and current laboratory test results for access by health professionals. The use case conveys how health care professionals would use such records and what standards would apply. Created in October 2005, the Healthcare Information Technology Standards Panel (HITSP) is a public-private partnership, sponsored by the American National Standards Institute and funded by the Office of the National Coordinator. (HITSP is the successor to the Consolidated Health Informatics initiative, which was dissolved and absorbed into the panel on September 30, 2006.) The panel was established to identify competing standards for the use cases developed by the American Health Information Community and “harmonize” the standards. (Harmonization is the process of identifying overlaps and gaps in relevant standards and developing recommendations to address these overlaps and gaps.) For example, for the three initial use cases developed by the American Health Information Community, HITSP identified competing standards by converting the use cases into detailed requirements documents; it then examined and assessed more than 700 existing standards that would meet those requirements. From those 700 standards, the panel identified 30 named standards and produced detailed implementation guidance describing the specific transactions and use of these named standards. This guidance is codified in an interoperability specification for each use case that integrates the standards. Each of the interoperability specifications developed by HITSP includes references to the identified standards or parts of standards and explains how they should be applied to specific topics. For example, among the standards referred to in one interoperability specification is the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT). This standard is to be used in the “Lab Result Terminology Component” of the specification. Once developed, the specifications are presented to the American Health Information Community, which assesses them for recommendation to the Secretary of HHS. The Secretary publicly “accepts” recommended specifications for a 1-year period of implementation testing, after which the Secretary can formally “recognize” the specifications and associated guidance as interoperability standards. This two-step process is intended to ensure that software developers have adequate time to implement recognized standards in their software. The year between acceptance and recognition allows the panel to refine its implementation guidance based on feedback from actual software implementation.
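To make the distinction between viewable and computable data more concrete, the sketch below contrasts a free-text allergy note with a coded entry of the kind a terminology standard such as SNOMED CT supports. The field names and the placeholder code are hypothetical illustrations and are not drawn from an actual HITSP specification or from either department’s systems.

```python
# Illustrative sketch only: contrasts how the same allergy information might be
# represented at two levels of interoperability. Field names and the placeholder
# code value are hypothetical, not drawn from an actual HITSP specification.

# Viewable (unstructured) data: a clinician must read and interpret the text.
viewable_note = "Patient reports hives after taking penicillin."

# Computable (structured, coded) data: a standardized terminology code lets
# software act on the information, for example by triggering a drug alert.
computable_entry = {
    "allergen": "penicillin",
    "reaction_code": "XXXXXX",   # placeholder standing in for a SNOMED CT code
    "code_system": "SNOMED CT",
}

def is_computable(entry) -> bool:
    """Treat a record as computable here if it carries a coded terminology entry."""
    return (
        isinstance(entry, dict)
        and bool(entry.get("reaction_code"))
        and bool(entry.get("code_system"))
    )

print(is_computable(viewable_note))     # False: free text is only viewable
print(is_computable(computable_entry))  # True: software can act on the coded entry
```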
Table 1 shows the current status of the interoperability specifications developed by HITSP. Each of the interoperability specifications in the table is associated with one of the seven use cases developed by the American Health Information Community in 2006 and 2007. The community is also developing six use cases for 2008, for which interoperability specifications have not yet been released; these include Consultation and Transfers of Care, Public Health Case Reporting, and Immunizations & Response Management. The Certification Commission for Healthcare Information Technology is an independent, nonprofit organization that certifies health IT products. HHS entered into a contract with the commission in October 2005 to develop and evaluate the certification criteria and inspection process for electronic health records. According to HHS, certification is to be the process by which the IT systems of federal health contractors are determined to meet federal interoperability standards. Certification helps assure purchasers and other users of health IT systems that the systems will provide needed capabilities (including ensuring security and confidentiality) and will work with other systems without reprogramming. Certification also encourages adoption of health IT by assuring providers that their systems can participate in nationwide health information exchange in the future. In 2006, the commission certified the first 37 ambulatory—or clinician office-based—electronic health record products as meeting baseline criteria for functionality, security, and interoperability. In 2007, the commission expanded certification to inpatient—or hospital—electronic health record products, which could significantly increase patients’ and providers’ access to the health information generated during a hospitalization. To date, the commission has certified over 100 electronic health record products. Since 2005, we have reported and testified on the various actions that HHS and the Office of the National Coordinator have taken to advance nationwide implementation of health IT, which include the establishment of the American Health Information Community and related activities, selection of initial standards to address specific health areas, and the release in July 2004 of a framework for strategic action. We pointed out in 2005 that this framework did not constitute a comprehensive national strategy with the detailed plans, milestones, and performance measures needed to ensure that the outcomes of the department’s various initiatives are integrated and its goals are met. As a result, we recommended that HHS establish detailed plans and milestones for each phase of the framework for strategic action and take steps to ensure that those plans are followed and milestones met. In this regard, in June 2008, the Office of the National Coordinator released a four-year strategic plan. Although we have not yet fully assessed this plan, if its milestones and measures for achieving an interoperable national infrastructure for health IT are appropriate, the plan could provide a useful roadmap to support the goal of widespread adoption of interoperable electronic health records. DOD and VA have been working to exchange patient health data electronically since 1998. However, the departments have faced considerable challenges in project planning and management, leading to repeated changes in the focus of their initiatives and target completion dates.
In reviews in 2001 and 2002, we noted management weaknesses, such as inadequate accountability and poor planning and oversight, and recommended that the departments apply principles of sound project management. In response, by July 2002, DOD and VA had revised their strategy to pursue two initiatives: (1) sharing information in existing systems and (2) developing modernized health information systems—replacing their existing (legacy) systems—that would be able to share data and, ultimately, use interoperable electronic health records. In their shorter-term initiatives to share information from existing systems, the departments began from different positions. VA has one integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which uses all electronic records and was developed in-house by VA clinicians and IT personnel. All VA medical facilities have access to all VistA information. In contrast, DOD uses multiple legacy medical information systems (table 2 illustrates selected systems), all of which are commercial software products that are customized for specific uses. Until recently, these systems could not share information. In addition, not all of DOD’s medical information is electronic: certain records are paper-based. As we have reported, the departments’ efforts to share information in their existing systems eventually included several sharing initiatives and exchange projects: The Federal Health Information Exchange (FHIE), completed in 2004, enabled DOD to electronically transfer service members’ electronic health information to VA when the members left active duty. The Laboratory Data Sharing Interface (LDSI), a project established in 2004, allows DOD and VA facilities to share laboratory resources. This interface, now deployed at nine locations, allows the departments to communicate orders for lab tests and their results electronically. The Bidirectional Health Information Exchange (BHIE), also established in 2004, was aimed at allowing clinicians at both departments viewable access to records on shared patients (that is, those who receive care from both departments—for example, veterans may receive outpatient care from VA clinicians and be hospitalized at a military treatment facility). Another benefit of the interface is that it allows DOD sites to see previously inaccessible data at other DOD sites. In the long term, each of the departments aims to develop a modernized system in the context of a common health information architecture that would allow a two-way exchange of health information. The common architecture is to include standardized, computable data; communications; security; and high-performance health information systems: DOD’s Armed Forces Health Longitudinal Technology Application (AHLTA) and VA’s HealtheVet. The departments’ modernized systems are to store information (in standardized, computable form) in separate data repositories: DOD’s Clinical Data Repository (CDR) and VA’s Health Data Repository (HDR). For the two-way exchange of health information, the two repositories are to be linked through an interface named CHDR, which the departments began to develop in March 2004; implementation of the first release of the interface (at one site) occurred in September 2006. Beyond these initiatives, in January 2007, the departments announced a further change to their information-sharing strategy: their intention to jointly determine an approach for inpatient health records.
On July 31, 2007, they awarded a contract for a feasibility study and exploration of alternatives. According to the departments, one of the options would be adopting a joint solution, which would be expected to facilitate the seamless transition of active-duty service members to veteran status, and make inpatient health care data on shared patients more readily accessible to both DOD and VA. In addition, the departments believe that a joint development effort could enable them to realize cost savings; however, no decision on a joint system has yet been made. According to the departments, they expect to receive recommendations on possible approaches at the end of July 2008. In our previous work (see Related GAO Products), we pointed out that in view of the many tasks and challenges associated with the departments’ long-term goal of seamless sharing of health information, it was essential that the departments develop a comprehensive project plan to guide these efforts to completion. Accordingly, in 2004, we recommended that the departments develop such a plan for the CHDR interface and that it include a work breakdown structure and schedule for all development, testing, and implementation tasks. Subsequently, the departments began work on the short-term initiatives described, and we raised concerns regarding how all these initiatives were to be incorporated into an overall strategy toward achieving the departments’ goal of a comprehensive, seamless exchange of health information. In response to our concerns, the departments began to develop such a comprehensive plan, which they called the DOD/VA Information Interoperability Plan. To provide input to the plan and determine priorities, in December 2007, the departments established the Joint Clinical Information Board, made up of senior clinical leaders from both departments. The board is responsible for establishing clinical priorities for electronic data sharing between the departments, determining essential health information to be shared, and further identifying and prioritizing data that should be viewable and data that should be computable. The departments produced a draft DOD/VA Information Interoperability Plan in March 2008. According to DOD and VA officials, the draft defines the technical and managerial processes necessary to satisfy the departments’ requirements and guide their activities to completion. According to these officials, review of this draft by senior DOD and VA officials is currently ongoing and is scheduled to be completed by August 2008. DOD and VA have established and implemented mechanisms for electronic sharing of health information, some of which is exchanged in computable form, while other information is viewable only. However, not all electronic health information is yet shared (for example, immunization records and history, data on exposure to health hazards, and psychological health treatment and care records). Further, although VA’s health information is all captured electronically, not all health data collected by DOD are electronic—many DOD medical facilities use paper-based health records. Computable data. Data in computable form are exchanged through the CHDR interface, which links the modernized health data repositories for the new systems that each department is developing. Implementing the interface required the departments to agree on standards for various types of data, put the data into the agreed standard formats, and populate the repositories with the standardized data. 
Currently, the types of computable health data being exchanged are limited to outpatient pharmacy and drug allergy data. The next type of data to be standardized, included in the repositories, and exchanged is laboratory data. These data are not shared for all patients—only those who are seen at both DOD and VA medical facilities, identified as shared patients, and then “activated.” Once a patient is activated, all DOD and VA sites can access information on that patient and receive alerts on allergies and drug interactions for that patient. According to DOD and VA officials, outpatient pharmacy and drug allergy data were being exchanged on more than 18,300 shared patients as of June 2008; however, officials stated that they are unable to track the number of shared patients currently receiving care from both departments, so the number of patients for whom data could potentially be shared is unknown. Viewable data. Data in viewable form are shared through the BHIE interface. Through BHIE, clinicians can query selected health information on patients from all VA and DOD sites and view current data onscreen almost immediately. Because the BHIE interface provides access to up-to-date information, the departments’ clinicians expressed strong interest in expanding its use. As a result, the departments decided to broaden the capability and expand its implementation. For example, the departments completed a BHIE interface with DOD’s Clinical Data Repository in July 2007, and they began sharing viewable patient vital signs information through BHIE in June 2008. Extending BHIE connectivity could provide both departments with the ability to view additional data in DOD’s legacy systems, until such time as the departments’ modernized systems are fully developed and implemented. According to a DOD/VA annual report and program officials, the departments now consider BHIE an interim step in their overall strategy to create a two-way exchange of electronic health records. Table 3 summarizes the types of health data currently shared via the departments’ various initiatives (including FHIE and LDSI), as well as additional types of data that are currently planned for sharing. As depicted in table 3, DOD and VA are sharing or plan to share a wide range of health information; however, other health information that the departments currently capture has not yet been addressed (for example, immunization records and history and data on exposure to health hazards). Further, although VA’s health information is all captured electronically, many DOD medical facilities continue to rely on paper records. Also, clinical encounters for those enrolled in the military’s TRICARE health care program are not captured in DOD’s electronic health system unless care is received at a military treatment facility. According to the departments’ officials, the DOD/VA Information Interoperability Plan (targeted for approval in August 2008) is to address these and other issues and define tasks required to guide the development and implementation of interoperable, bidirectional, and standards-based electronic health records and capabilities for military and veteran beneficiaries. DOD and VA are in the process of finalizing the plan, and we have not yet performed an assessment of it. However, if it includes the essential elements needed to guide the departments in achieving their long-term goal of seamless sharing of health information, it could improve the prospects for the successful achievement of this goal.
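The sketch below illustrates, in simplified form, the kind of automated drug allergy check that computable data exchange makes possible for activated shared patients, as described above. The data model, patient identifier, drug names, and matching rule are hypothetical simplifications and do not represent the actual CHDR logic used by the departments.

```python
# Minimal sketch of an automated drug allergy check enabled by computable data.
# The data model, patient identifier, drug names, and matching rule are
# hypothetical simplifications, not the actual CHDR logic used by DOD and VA.

shared_patient_allergies = {
    "patient-001": ["penicillin"],  # hypothetical activated shared patient
}

def check_new_prescription(patient_id: str, drug: str) -> str:
    """Return an alert if the prescribed drug matches a recorded allergy."""
    allergies = shared_patient_allergies.get(patient_id, [])
    if drug.lower() in (a.lower() for a in allergies):
        return f"ALERT: {drug} conflicts with a recorded allergy for {patient_id}"
    return "No allergy conflict found"

print(check_new_prescription("patient-001", "Penicillin"))  # triggers the alert
print(check_new_prescription("patient-001", "ibuprofen"))   # no conflict
```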
DOD and VA have agreed upon numerous common standards that allow them to share health data, which include standards that are part of current and emerging federal interoperability specifications. The foundation built by this collaborative process has allowed DOD and VA to begin sharing computable health data (the highest level of interoperability). Continuing their historical involvement in efforts to agree upon standards for the electronic exchange of clinical health information, the departments are also participating in recent ongoing standards-related initiatives led by the Office of the National Coordinator for Health Information Technology (within the Department of Health and Human Services). In addition, DOD is taking steps to arrange for certification of its modernized health information system (a customized commercial system) against current standards. The standards agreed to by the two departments are listed in a jointly published common set of interoperability standards called the Target DOD/VA Health Standards Profile. This profile resulted from an effort begun in 2001, in which the two departments compared their individual standards profiles for compatibility and began converging them. First developed in 2004, the Target Standards Profile is updated annually and is used for reviewing joint DOD/VA initiatives to ensure standards compliance. According to the departments, they anticipate continued updates and revisions to the Target Standards Profile as additional federal standards emerge and are in varying stages of recognition and acceptance by HHS (as previously presented in table 1). The current version of the profile, dated September 2007, includes federal standards (such as data standards established by the Food and Drug Administration and security standards established by the National Institute of Standards and Technology); industry standards (such as wireless communications standards established by the Institute of Electrical and Electronics Engineers and Web file sharing standards established by the American National Standards Institute); and international standards (such as SNOMED CT, which was mentioned earlier, and security standards established by the International Organization for Standardization). The profile also indicates which of these standards support the HHS-recognized use cases and HITSP interoperability specifications. For example, for clinical data on allergy reactions, the departments agreed to use SNOMED CT codes (mentioned previously), which are included as part of HITSP interoperability specifications. In particular, for the two kinds of data now being exchanged in computable form through CHDR (pharmacy and allergy data), DOD and VA adopted National Library of Medicine data standards for medications and drug allergies, as well as SNOMED CT codes for allergy reactions. According to officials, the departments rely on published versions of the library standards, and they can also submit new terms to the National Library of Medicine for inclusion in the standards. Also, the departments can exchange a standardized allergy reaction as long as it is mapped to a SNOMED CT code in each department’s allergy reaction file. If a coded term is not available in the files, clinicians can exchange descriptions of allergy reactions in free text.
This standardization was a prerequisite for exchanging computable medical information—an accomplishment that, according to the National Coordinator for Health IT, has not yet been achieved in any other sector. As part of this continuing involvement in standards-setting efforts, health officials from both DOD and VA participate as members of the American Health Information Community and HITSP. For example, the 18-member community includes high-level representatives from both DOD (the Assistant Secretary of Defense for Health Affairs) and VA (the Director, Health Data and Informatics, Veterans Health Administration). DOD and VA are members of the HITSP Board and are actively working on several committees and groups (Provider Perspective Technical Committee; Population Perspective Technical Committee; Security, Privacy and Infrastructure Domain Technical Committee; Care Management and Health Records Domain Technical Committee; Administrative and Financial Domain Technical Committee; Harmonization Committee; and Foundation Committee). The National Coordinator indicated that such participation is important and stated that it would not be advisable for DOD and VA to move significantly ahead of the national standards initiative; if they did, the departments might have to change the way their systems share information by adjusting them to the national standards later, as the standards continue to evolve. In addition, according to DOD officials, the department is taking steps to ensure that the electronic health records produced by its modernized health information system, AHLTA, which is a customized commercial software application, are compliant with standards by arranging for certification through the Certification Commission for Healthcare Information Technology. Specifically, version 3.3 of AHLTA, which is still undergoing beta testing, was conditionally certified in April 2007 against 2006 outpatient electronic health record criteria established by the commission. DOD officials stated that AHLTA version 3.3 has been installed at three DOD locations for beta testing and has met specific functionality, interoperability, and security requirements. The commission cannot fully certify this version of AHLTA until it has verified that the system has been in operational use at a medical site. The departments’ efforts to share data and to be involved in standardization activities are important mechanisms for ensuring that their electronic health records are both interoperable and aligned with emerging standards and specifications. To accelerate the departments’ ongoing interoperability efforts, Congress included provisions establishing a joint interagency program office in the National Defense Authorization Act for Fiscal Year 2008. Under the act, the Secretary of Defense and the Secretary of Veterans Affairs were required to jointly develop schedules and benchmarks for setting up the DOD/VA Interagency Program Office, as well as for other activities for achieving interoperable health information (that is, establishing system requirements, acquisition and testing, and implementation of interoperable electronic health records or capabilities). The schedules and benchmarks were due 30 days after passage of the act (February 28, 2008). The departments developed a draft implementation plan defining fiscal years 2008 and 2009 schedules and milestones; the date of the draft was April 25—almost 2 months after the due date.
In the effort to set up the program office, the departments appointed an Acting Director from DOD and an Acting Deputy Director from VA on April 17, 2008. According to the Acting Director, they have also detailed staff and provided temporary space and equipment to a transition team. According to this official, through the efforts of the transition team, the departments are currently developing a charter for the office, defining and approving an organizational structure, and preparing to begin recruiting permanent staff for the office, who are expected to number about 30. According to the implementation plan, the departments expect to appoint a permanent Director and Deputy and begin recruiting staff by October 2008. According to the Acting Director, program staff are expected to be in place, and the office is expected to be fully operational by December 2008. According to the departments, $4.94 million was requested to fund the office for fiscal year 2008; this funding is expected to be received this July. Funding requirements of $6.94 million for fiscal year 2009 were submitted in June. The draft implementation plan includes schedules and milestones for achieving interoperable health information in two stages. The first stage—Interoperability I, to be completed by September 2008—is to address those health data most commonly required by health care providers, as validated by the Joint Clinical Information Board. The first milestone for Interoperability I, sharing vital signs information, has been achieved. The remaining milestones (sharing questionnaires and forms, family history, social history, and other history) are all due September 2008. The second stage—Interoperability II, to be completed by September 2009—is to address additional health information enhancements. However, the information to be covered by these enhancements has not yet been fully defined, and milestone dates are not fully established. According to the plan, the requirements for the Interoperability II enhancements are to be validated by the Joint Clinical Information Board, which sets the clinical priorities for what electronic health information should be shared next. This validation, followed by approval by senior department leadership, was to be complete by June 2008. However, according to department officials, the completion date is now expected to be the end of July 2008. Of 52 milestone dates for Interoperability II, 19 are yet to be determined. For example, milestone dates have not been identified regarding capabilities to share data on exposures to health hazards, immunization records and history, family history, and psychological health treatment and care records. Officials stated that decisions on these milestone dates will depend on clinical priorities, technical considerations, and policy decisions. For example, the exchange of psychological health treatment and care records requires policy decisions regarding appropriate access. Further, according to the implementation plan draft, the plan is intended to serve as a “living document” that will be updated and refined as more detailed information becomes known on planned fiscal year 2008 and fiscal year 2009 initiatives, and as health care information needs change. According to the Acting Director, the draft implementation plan has not been finalized because of remaining uncertainties regarding such issues as space and staffing needs.
For example, although the office’s responsibility is to cover electronic health records and capabilities, the departments’ leadership may broaden its scope to include sharing of personnel and benefits data, which would affect the number of staff required. However, although the implementation plan (as a planning tool) is appropriately a living document, it is nonetheless important to complete the planning and make the decisions needed to finalize the initial plan, particularly in view of the fast-approaching September 2009 deadline. Further, according to department officials, the joint interagency program office will play a crucial role in coordinating the departments’ work to accelerate their interoperability efforts. An important aspect of this coordination will be managing the further development and implementation of the DOD/VA Information Interoperability Plan, currently under review by senior management. According to these officials, having a centralized office to take on this role will be a primary benefit. However, the effort to set up the program office is still in its early stages. The positions of Director and Deputy Director are not yet permanently filled, permanent staff have not yet been hired, and facilities have not yet been designated for housing the office. Until the program office is fully established, it will not be able to play this crucial role effectively. DOD and VA are currently sharing more health information than ever before, including exchanging some at the highest level of interoperability, that is, in computable form. The departments also have efforts under way to share additional information. Issues remaining to be addressed include electronic sharing of the information in paper-based health records and the completion of their long-range plans to develop fully interoperable health information systems. According to the departments, the DOD/VA Information Interoperability Plan is to address these and other issues. Once the plan is finalized and approved by DOD and VA officials, we intend to perform an assessment of the plan. However, if the plan includes the essential elements needed to guide the departments in achieving their long-term goal of seamless sharing of health information, it could improve the prospects for the successful achievement of this goal. Further enhancing interoperability depends on adherence to common standards. The two departments have agreed on standards and are working with each other and federal groups to help ensure that their systems are both interoperable and compliant with current and emerging federal standards. The joint interagency program office is to play a crucial role in accelerating the departments’ efforts to achieve electronic health records and capabilities that allow for full interoperability. However, it is not expected to be fully set up until the end of the year, after which only 9 months will remain to meet the goal of full interoperability between the departments by September 2009. The implementation plan, which was almost 2 months late, remains in draft, with many milestone dates yet to be determined. In view of the short timeframes, without a fully established program office and a finalized implementation plan with set milestones, the departments may be challenged in meeting the required date for achieving interoperable electronic health records and capabilities.
To better ensure that the effort by DOD and VA to achieve fully interoperable electronic health record systems or capabilities is accelerated, we recommend that the Secretaries of Defense and Veterans Affairs give priority to fully establishing the Joint Interagency Program Office by expediting efforts to put in place permanent leadership, staff, and facilities and make the necessary decisions to finalize the draft implementation plan. In providing written comments on a draft of this report, the Assistant Secretary of Defense for Health Affairs and the Secretary of Veterans Affairs agreed with our recommendations. (The departments’ comments are reproduced in app. II and app. III, respectively.) DOD stated that high priority will be given to fully establishing the Joint Interagency Program Office, with specific focus on expanding efforts related to permanent leadership, staff, and facilities. DOD also provided technical comments on the draft report, which we incorporated as appropriate. VA’s comments described actions planned or being taken that respond to our recommendations. For example, according to VA, the Deputy Director of the Interagency Program Office is expected to be appointed by October 2008. In addition, VA stated that the departments collaboratively determined the number and type of staff required for the new office and expect to hire permanent staff by December 2008. Further, DOD has taken the lead on securing permanent facilities for the program office and is currently working with the General Services Administration to find suitable space. In addition, VA stated that the departments are in the process of finalizing the implementation plan and that, by October 31, 2008, they expect to identify the milestones and timelines for defining requirements to support interoperable health records. The department noted that the Joint Clinical Information Board is expected to identify, by July 31, 2008, the specific data types and format for sharing that must be achieved by September 2009. If the actions planned or currently under way are properly implemented, they should help ensure that DOD and VA will be successful in meeting their goals for sharing interoperable health information. We are sending copies of this report to the Secretaries of Veterans Affairs and Defense, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To describe the progress of the Department of Defense (DOD) and Department of Veterans Affairs (VA) to date on developing electronic health records systems or capabilities that allow for full interoperability of personal health care information between the departments, we reviewed our previous work on DOD and VA efforts to develop health information systems, interoperable health records, and interoperability standards to be implemented in federal health care programs. Additionally, we reviewed information gathered from agency documentation and interviews with cognizant DOD and VA officials relating to the departments’ efforts to share electronic health information.
Further, we visited a DOD military treatment facility and a VA medical center (Walter Reed Army Medical Center and the Washington, D.C., VA Medical Center), chosen because they were accessible and allowed us to observe the sharing capabilities and functionality of the two departments’ electronic health information systems. To describe steps taken by the departments to ensure that their health records comply with applicable interoperability standards, implementation specifications, and certification criteria of the federal government, we analyzed information gathered from DOD and VA documentation and interviews pertaining to the interoperability standards and implementation specifications that the two departments have agreed to for exchanging health information via their health care information systems. We reviewed documentation and interviewed agency officials from the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology to obtain information regarding the defined federal interoperability standards, implementation specifications, and certification criteria. We also reviewed documentation and interviewed DOD and VA officials from the Joint Clinical Information Board to determine the extent to which the departments have adopted federal interoperability standards, implementation specifications, and certification criteria. To describe efforts to set up the joint interagency program office, we analyzed documentation regarding the establishment of the office and interviewed responsible officials. We conducted this performance audit at VA and DOD sites in the greater Washington, D.C., metropolitan area from March 2008 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributions were made to this report by Barbara S. Oliver (Assistant Director), Barbara Collier, Kelly Shaw, and Robert Williams, Jr. VA and DOD Health Care: Progress Made on Implementation of 2003 President’s Task Force Recommendations on Collaboration and Coordination, but More Remains to Be Done. GAO-08-495R. Washington, D.C.: April 30, 2008. Health Information Technology: HHS Is Pursuing Efforts to Advance Nationwide Implementation, but Has Not Yet Completed a National Strategy. GAO-08-499T. Washington, D.C.: February 14, 2008. Information Technology: VA and DOD Continue to Expand Sharing of Medical Information, but Still Lack Comprehensive Electronic Medical Records. GAO-08-207T. Washington, D.C.: October 24, 2007. Veterans Affairs: Progress Made in Centralizing Information Technology Management, but Challenges Persist. GAO-07-1246T. Washington, D.C.: September 19, 2007. Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Remain Far from Having Comprehensive Electronic Medical Records. GAO-07-1108T. Washington, D.C.: July 18, 2007. Health Information Technology: Efforts Continue but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-988T. Washington, D.C.: June 19, 2007. Information Technology: VA and DOD Are Making Progress in Sharing Medical Information, but Are Far from Comprehensive Electronic Medical Records.
GAO-07-852T. Washington, D.C.: May 8, 2007. DOD and VA Outpatient Pharmacy Data: Computable Data Are Exchanged for Some Shared Patients, but Additional Steps Could Facilitate Exchanging These Data for All Shared Patients. GAO-07-554R. Washington, D.C.: April 30, 2007. Health Information Technology: Early Efforts Initiated but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-400T. Washington, D.C.: February 1, 2007. Health Information Technology: Early Efforts Initiated, but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-238. Washington, D.C.: January 10, 2007. Health Information Technology: HHS Is Continuing Efforts to Define Its National Strategy. GAO-06-1071T. Washington, D.C.: September 1, 2006. Information Technology: VA and DOD Face Challenges in Completing Key Efforts. GAO-06-905T. Washington, D.C.: June 22, 2006. Health Information Technology: HHS Is Continuing Efforts to Define a National Strategy. GAO-06-346T. Washington, D.C.: March 15, 2006. Computer-Based Patient Records: VA and DOD Made Progress, but Much Work Remains to Fully Share Medical Information. GAO-05-1051T. Washington, D.C.: September 28, 2005. Health Information Technology: HHS Is Taking Steps to Develop a National Strategy. GAO-05-628. Washington, D.C.: May 27, 2005. Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004. Computer-Based Patient Records: Improved Planning and Project Management Are Critical to Achieving Two-Way VA-DOD Health Data Exchange. GAO-04-811T. Washington, D.C.: May 19, 2004. Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004. Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003. VA Information Technology: Management Making Important Progress in Addressing Key Challenges. GAO-02-1054T. Washington, D.C.: September 26, 2002. Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002. VA Information Technology: Progress Made, but Continued Management Attention Is Key to Achieving Results. GAO-02-369T. Washington, D.C.: March 13, 2002. VA and Defense Health Care: Military Medical Surveillance Policies in Place, but Implementation Challenges Remain. GAO-02-478T. Washington, D.C.: February 27, 2002. VA and Defense Health Care: Progress Made, but DOD Continues to Face Military Medical Surveillance System Challenges. GAO-02-377T. Washington, D.C.: January 24, 2002. VA and Defense Health Care: Progress and Challenges DOD Faces in Executing a Military Medical Surveillance System. GAO-02-173T. Washington, D.C.: October 16, 2001. Computer-Based Patient Records: Better Planning and Oversight by VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001.
Under the National Defense Authorization Act for Fiscal Year 2008, the Department of Defense (DOD) and the Department of Veterans Affairs (VA) are required to accelerate the exchange of health information between the departments and to develop systems or capabilities that allow for full interoperability (generally, the ability of systems to use data that are exchanged) and that are compliant with federal standards. The act also established a joint interagency program office to act as a single point of accountability for the effort, whose function is to implement such systems or capabilities by September 30, 2009. Further, the act required that GAO semi-annually report on the progress made in achieving these goals. For this first report, GAO describes the departments' progress to date in sharing electronic health information, developing electronic health records that comply with federal standards, and setting up the joint interagency program office. To do so, GAO reviewed its past work, analyzed agency documentation, and conducted interviews with agency officials. DOD and VA are sharing some, but not all, electronic health information at different levels of interoperability. Specifically, pharmacy and drug allergy data on about 18,300 patients who receive care from both departments are exchanged at the highest level of interoperability--that is, in computable form; at this level, the data are in a standardized format that a computer application can act on (for example, to provide alerts to clinicians of drug allergies). In other cases, data can be viewed only--a lower level of interoperability that still provides clinicians with important information. However, not all electronic health information is yet shared, and information is still captured on paper at many DOD medical facilities. According to the departments, a DOD/VA Information Interoperability Plan (targeted for approval in August 2008) is to address these and other issues and define tasks required to guide the development and implementation of an interoperable electronic health record capability. If properly developed and implemented, the plan could help the departments achieve the goal of seamless sharing of health information. DOD and VA have agreed upon numerous common standards that allow them to share health data, which include standards that are part of current and emerging federal interoperability specifications. This collaboration provided the essential foundation for the departments to begin sharing computable health data. The departments are currently participating in recent initiatives led by the Office of the National Coordinator for Health Information Technology (within the Department of Health and Human Services) that are aimed at promoting the adoption of federal standards and broader use of electronic health records. These initiatives include identifying relevant existing standards, identifying and addressing overlaps and gaps in the standards, and developing interoperability specifications and certification criteria based on these standards. The involvement of the departments in these activities is an important mechanism for aligning their electronic health records with emerging federal standards. In establishing the joint interagency program office, Congress directed the departments to develop an implementation plan for setting up the office and carrying out related activities (such as validating and establishing requirements for interoperable health capabilities). 
The departments' effort to set up the program office is still in its early stages. Leadership positions in the office are not yet permanently filled, staffing is not complete, and facilities to house the office have not been designated. Further, the implementation plan is currently in draft, and although it includes schedules and milestones, dates for several activities have not yet been determined (such as implementing a capability to share immunization records), even though all capabilities are to be achieved by September 2009. Without a fully established program office and a finalized implementation plan with set milestones, the departments may be challenged in meeting the required date for achieving interoperable electronic health records and capabilities.
From the passage of the Social Security Act in 1935 until the welfare reform law of 1996, immigrants lawfully admitted for permanent U.S. residence were not precluded by their immigration status from eligibility for welfare benefits. Welfare reform changed this by substantially restricting pre-reform and new immigrants’ access to federal means-tested benefits. Table 1 details the program eligibility changes for immigrants under the major federal welfare programs. As a result of these changes, pre-reform immigrants remain eligible for some benefits. New immigrants are ineligible for federal benefits during their first 5 years of U.S. residency, until they become naturalized citizens, or unless they have an immigration status that is excepted from the restrictions. The welfare reform law allows states to decide whether pre-reform immigrants retain eligibility for federal TANF and Medicaid and whether new immigrants can apply for these programs after a mandatory 5-year bar. As originally passed, the welfare reform law generally eliminated immigrants’ eligibility for SSI and food stamps. The Balanced Budget Act of 1997 reinstated SSI eligibility for pre-reform immigrants already receiving benefits and allowed pre-reform immigrants who are or become blind or disabled to apply for benefits in the future. New immigrants, however, generally cannot receive SSI and food stamp benefits unless they meet certain exceptions or become citizens. These exceptions appear in table 1, which shows that the exception allowing benefits to those who can be credited with 40 work quarters applies only to new immigrants with 5 years of U.S. residency. The welfare reform law also specifies federal programs from which an immigrant cannot be barred. The recent legislative change has restored food stamp eligibility, effective November 1, 1998, to pre-reform immigrants receiving benefits or assistance for blindness or disability, those younger than 18, and those aged 65 or older as of August 22, 1996. The law also restores eligibility to certain Hmong or Highland Laotian tribe entrants lawfully residing in the United States, regardless of their date of entry, and extends the eligibility period for refugees and asylees from 5 to 7 years after entering the country. In addition to restricting immigrants’ eligibility for welfare benefits, the 1996 welfare reform law revised requirements for those sponsoring immigrants’ entry into the United States. Under welfare reform, an immigrant sponsored by a relative must have the sponsor sign an affidavit of support promising to provide financial assistance if needed. In addition, to better ensure that sponsors will be financially able to help the immigrants they have sponsored, the new law requires that sponsors have incomes equal to at least 125 percent of the federal poverty level for the number of people that they will support, including themselves, their dependents, and the sponsored immigrant and accompanying family members. Moreover, to address concerns about the enforceability of affidavits of support executed before welfare reform, the new law specifies that each affidavit must be executed as a legally binding contract enforceable against the sponsor by the immigrant, the U.S. government, or any state or locality that provides any means-tested public benefit. The affidavit is enforceable until the sponsored immigrant naturalizes, is credited with 40 work quarters, permanently leaves the country, or dies.
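As a rough illustration of the sponsor income test described above, the sketch below checks whether a sponsor’s income reaches 125 percent of the federal poverty level for the relevant household size; the poverty guideline amounts shown are placeholders rather than the actual guidelines for any particular year.

```python
# Rough illustration of the sponsor income test: income must be at least
# 125 percent of the federal poverty level for the number of people supported
# (sponsor, dependents, and the sponsored immigrant and accompanying family
# members). The guideline amounts below are placeholders, not the actual
# federal poverty guidelines for any particular year.

PLACEHOLDER_POVERTY_GUIDELINE = {
    1: 8050, 2: 10850, 3: 13650, 4: 16450,  # placeholder annual amounts by household size
}

def meets_sponsor_income_test(sponsor_income: float, household_size: int) -> bool:
    """Check whether income is at least 125 percent of the poverty level."""
    guideline = PLACEHOLDER_POVERTY_GUIDELINE[household_size]
    return sponsor_income >= 1.25 * guideline

# Example: a sponsor supporting a household of 4 on $22,000 of annual income.
print(meets_sponsor_income_test(22000, 4))  # True: 22,000 >= 1.25 * 16,450 = 20,562.50
```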
In addition to requiring legally enforceable affidavits, the law extends a sponsor’s responsibility to support immigrants by lengthening the time a sponsor’s income is attributable to a new immigrant if the immigrant applies for welfare benefits. Some federal programs previously mandated this attribution, called deeming; however, the sponsor’s income was generally included for only the first 3 or 5 years of an immigrant’s residency. The law now requires states to deem a sponsor’s income in federal means-tested programs until the immigrant becomes a citizen or can be credited with 40 work quarters. The welfare reform law also gives states the option of adding deeming requirements to state and local means-tested programs. The new support and deeming requirements are intended to ensure that immigrants rely on their sponsors rather than public benefits for aid, that the sponsors have the financial capacity to provide aid, and that sponsors are held accountable for helping immigrants they have agreed to support. In this way, unless a sponsor suffers a financial setback, an immigrant should be less likely to need or receive public benefits. In addition, the welfare reform law requires states to implement new procedures to verify an alien’s status when determining eligibility for federal public benefits. The states have 2 years after the Immigration and Naturalization Service (INS) issues final regulations to ensure that their verification procedures comply with the regulations. The procedures include verifying individuals’ status as citizens or aliens, information that the states use in determining individuals’ eligibility for federal public welfare benefits, including grants, contracts, or loans provided by a federal agency or appropriated U.S. funds. INS responds to inquiries by federal, state, and local government agencies seeking to verify or determine citizenship or immigration status. Almost all states decided to continue providing TANF and Medicaid benefits for pre-reform immigrants and to provide these benefits to new immigrants after 5 years of U.S. residency. Fewer states offer assistance comparable to TANF and Medicaid to new immigrants during the mandatory 5-year federal bar. Some of these state programs, however, limit benefits to certain categories of immigrants or impose certain requirements such as living in the state for 12 months before applying for benefits. States have the option of continuing TANF and Medicaid benefits to pre-reform immigrants and providing these benefits to new immigrants after 5 years of U.S. residency. Almost all states and the District of Columbia are continuing TANF for both groups. Forty-nine states and the District of Columbia are continuing federal Medicaid benefits for these immigrants. Wyoming is the only state to discontinue Medicaid for all immigrants. Immigrants no longer eligible for the full scope of Medicaid benefits, however, continue to be eligible for emergency services under Medicaid. About a third of the states provide state-funded temporary assistance to needy families, medical assistance, or both to new immigrants during their 5-year bar from federal programs. Six of the 10 states where most immigrants reside provide assistance to those no longer eligible for TANF and Medicaid. California, Maryland, Massachusetts, and Washington provide both state-funded cash and medical assistance, while New Jersey and Virginia provide medical assistance.
Some of these state programs impose deeming requirements similar to the federal program rules and state residency requirements. In addition, some states restrict medical assistance to immigrant children, pregnant women, or those in residential care before a specific date. Maryland, for example, provides medical assistance to pregnant women and children, and Virginia provides benefits to children. In the states we visited, we observed a range of these types of benefits available to immigrants. California, where more than 35 percent of the nation’s immigrants live, provides both state-funded cash and medical assistance to new immigrants during their 5-year bar from federal benefits. New Jersey provides state-funded medical assistance to new immigrants, although it has proposed changes to state legislation to limit the scope of medical assistance benefits to emergency services only. In Washington, new immigrants may obtain state-funded cash or medical assistance after meeting a 12-month residency requirement and the state-imposed federal deeming requirements. Washington state officials noted that the state included the residency requirement to address concerns about attracting immigrants from other states and becoming a welfare magnet for immigrants.

Before welfare reform, SSI provided a monthly cash benefit to needy individuals who were aged, blind, or disabled whether they were immigrants or citizens. Although welfare reform ultimately retained SSI eligibility for most pre-reform immigrants, it barred new immigrants from receiving SSI benefits until they become citizens or qualify for an exception to the restrictions. Few states are replacing SSI benefits with new state-funded programs; however, many states have cash assistance programs available to those no longer eligible for SSI. The Social Security Administration (SSA) prepared to terminate benefits for almost 580,000 immigrants before the welfare reform law was amended to continue SSI benefits for pre-reform immigrants already on the rolls and to provide benefits in the future for those pre-reform immigrants who are or become blind or disabled. Pre-reform immigrants not already receiving SSI will no longer qualify for benefits solely on the basis of advanced age. Approximately 20,000 pre-reform noncitizens, however, do not meet the law’s definition of “qualified alien” and will therefore lose their SSI benefits in 1998 unless they adjust their immigration status to an eligible class. According to SSA, the noncitizens scheduled to lose their benefits were categorized as Permanently Residing Under the Color of Law (PRUCOL). Although few states are providing state-funded benefits to specifically replace SSI benefits, most states have general assistance programs through which some immigrants who have lost SSI and those who are no longer eligible may obtain aid. General assistance is one of the largest structured state or local programs providing assistance to the needy on an ongoing basis. According to a 1996 Urban Institute report, 41 states or localities within those states and the District of Columbia, including the 10 states where most immigrants reside, provided such programs. (The nine states without state or local general assistance programs are Alabama, Arkansas, Louisiana, Mississippi, Oklahoma, South Carolina, Tennessee, West Virginia, and Wyoming. See State General Assistance Programs - 1996, Urban Institute (Washington, D.C.: Oct. 1996); information for that report was gathered before the passage of the welfare reform law.) Under welfare reform, however, states have the option of limiting the eligibility of immigrants for state-funded public benefits, including general assistance. General assistance benefits are generally lower than federal cash assistance and vary by state in the populations served. In California, where counties fund and administer these programs, benefits range from $212 to $345 per month, considerably lower than the average monthly SSI benefit of $532 for California’s immigrants. In addition, the groups of individuals who may apply for general assistance range from all financially needy people to needy families with children and the disabled, elderly, unemployable, or a combination of these groups. In Washington, immigrants ineligible for SSI who are 18 or older and incapable of gainful employment for at least 90 days may receive assistance through the state’s General Assistance-Unemployable program; however, new immigrant children with disabilities who might have been eligible for SSI under previous law are ineligible for this program. These benefits, which average $339 per month in Washington, are less than the state’s average SSI benefit of $512 per month. On the basis of our analysis of information compiled by the National Immigration Law Center, few states have programs to specifically replace SSI benefits for new immigrants. Two states, Hawaii and Nebraska, offer state-funded benefits to disabled, blind, and elderly immigrants specifically to replace the SSI benefits to which they are no longer entitled. Colorado offers cash assistance to elderly immigrants no longer eligible for SSI.

With the continuation of TANF, Medicaid, and SSI benefits to pre-reform immigrants, the largest federal benefit loss for most immigrants is the termination of food stamps. At the time of our review, some states had created state-funded programs that were replacing benefits for about one-quarter of those estimated to no longer be eligible for federal food stamps nationwide. Fewer states offer such benefits to new immigrants. States’ responses to the most recent legislative change restoring eligibility to some of the pre-reform immigrants are not yet known. This group of immigrants consists mostly of children, the disabled, and the elderly—those groups who were most often targeted in the state-funded programs. Besides funding replacement food assistance programs, many states have increased funding for emergency food providers such as food banks. The states and immigrant advocacy groups contacted for our prior study, however, expressed concern that the limited emergency food assistance may be insufficient to meet the needs of immigrants who lost their eligibility for food stamps. The year following welfare reform, an estimated 940,000 of the 1.4 million immigrants receiving food stamps lost their eligibility for benefits, according to the U.S. Department of Agriculture (USDA). Those no longer eligible would have otherwise received about $665 million in federal food stamps during fiscal year 1997. Almost one-fifth of those no longer eligible were immigrant children. USDA determined that most of those who remained eligible did so because they became citizens or met the exception of having 40 or more work quarters. The most recent legislation (P.L.
105-185) restores federal food stamp eligibility, effective November 1, 1998, to 250,000—mostly children, the disabled, and the elderly—of the estimated 820,000 immigrants no longer eligible for food stamps in fiscal year 1999, according to USDA. About 70 percent of the 820,000 immigrants remain ineligible for food stamps. At the time of our review, 14 states representing almost 90 percent of immigrants nationwide receiving food stamps in 1996 were replacing food stamp benefits with state-funded benefits to a portion of immigrants no longer eligible. State appropriations for these programs totaled almost $187 million for 1998. Eight states are purchasing federal food stamps, four states are issuing food stamp benefits through their electronic benefit transfer (EBT) system, and two states developed their own food voucher or cash assistance programs. Most of these programs’ benefit levels and eligibility criteria, with the exception of immigrant status, reflect the federal Food Stamp program and were implemented immediately after federal benefit terminations on September 1, 1997. According to our 1997 survey, the majority of the remaining states are not replacing or are not planning to replace the terminated food stamp benefits for legal immigrants. Table 2 provides more detailed information on these programs. Instead of setting up an entirely new state food assistance program, Washington was the first of eight states to contract with USDA to purchase federal food stamps with state funds. A provision in the Emergency Supplemental Appropriations Act of 1997 (P.L. 105-18) made it possible for the states to purchase federal food stamp coupons to provide nutrition assistance to individuals, including immigrants, made ineligible for federal food stamps. According to Washington state officials, allowing the states to purchase federal coupons saves the states the expense of creating their own voucher programs and makes the program more seamless to recipients and grocery store merchants. States are required to pay USDA the value of the benefits plus the costs of printing, shipping, and redeeming the coupons. The majority of the states replacing lost federal food stamps, however, allow eligibility only to certain immigrant categories. According to state-reported participation rates, about one-quarter of immigrants who no longer qualify for federal food stamps participate in state-funded food assistance programs. Most of these state programs target immigrants generally considered most vulnerable, such as children under age 18, the disabled, and the elderly—those aged 65 and older. California, with the largest population of immigrants, chose to provide state-funded food stamps to pre-reform immigrants younger than 18 or those aged 65 and older—about 56,000 of the estimated 151,700 immigrants whose federal benefits were terminated. The state-funded food stamp programs generally target the same groups whose eligibility for federal food stamp benefits has been restored. States’ responses to the restoring of these benefits, such as changing eligibility for state-funded programs, are unknown at this time. Like most pre-reform immigrants, new immigrants are also restricted from receiving federal food stamps. Currently, 6 of the 14 states with food stamp replacement programs—Connecticut, Florida, Maryland, Massachusetts, Minnesota, and Washington—allow eligibility to some new immigrants. Two of these states, however, limit food assistance to those living there as of 1997. 
At the time of our review, officials in these states could not determine how many of the immigrants receiving state-funded benefits were new immigrants. Although most states have no program specifically designed to replace federal food stamps for immigrants, they do provide temporary food assistance through emergency programs and local food banks or pantries. For example, the states match a level of federal funds for emergency food providers through The Emergency Food Assistance Program (TEFAP). Many states, anticipating the increased demand for food assistance by immigrants, increased funding to food banks and emergency food providers. Colorado, for example, appropriated $2 million in 1998 for a new program to provide emergency assistance, including food, to immigrants. In addition to state-funded efforts, one locality we reviewed was providing funds to local food banks. In 1997, San Francisco added $186,000 to the local food bank budget to set up three or four new food distribution sites in highly populated immigrant communities. Immigrants no longer eligible for federal food stamp benefits received a mailed notice about these new distribution centers and were told to present their letters at one of the distribution sites to receive food on a weekly basis. Local officials told us that the food supply would last recipients 3 to 5 days. According to our 1997 study, some localities are working with local organizations to plan for the expected increase in the need for food assistance. Organization officials fear their resources may be insufficient to meet the needs of individuals no longer eligible for food stamps. These officials do not believe their organizations can replace the long-term assistance that federal food stamps provided. Furthermore, in a study conducted by the U.S. Conference of Mayors, most surveyed cities reported that immigrants’ requests for emergency food assistance increased by an average of 11 percent in the first half of 1997. Although concerns exist about the impact of benefit restrictions for immigrants, such as the discontinuance of food stamps, no major monitoring efforts are required or planned in the states we visited or at the federal level. Moreover, a recent study for the U.S. Commission on Immigration Reform found that the states with large immigrant populations had no comprehensive plans for monitoring the impact of welfare reform eligibility changes on immigrants. In addition, many immigrant advocacy groups we interviewed expressed concern about states’ and localities’ ability to meet immigrants’ income, food, and medical needs. Some advocacy groups noted they were conducting studies to measure the impact of federal restrictions on those affected. In addition to the federal and state programs already discussed, at least 12 states help immigrants through statewide naturalization assistance programs, according to information from the National Immigration Law Center. Helping immigrants gain citizenship offers them the ability to keep or obtain eligibility for federal benefits and reduces state spending on immigrants’ benefits. Even with state-provided assistance, the naturalization process takes time and, according to INS, the number of applications continues to increase. Naturalization assistance ranges from referrals to community services to naturalization preparation classes and financial assistance with the $95 application fee.
Anticipating the restrictions for immigrants under welfare reform, New Jersey allocated $4 million for 1997 and 1998, which was matched by private funds, for its naturalization outreach program. New Jersey’s program includes English and civics classes, legal assistance with applications, and help with medical waivers for exemption from citizenship or language testing. Washington, which also began naturalization efforts before welfare reform, boosted funding for its program to $1.5 million per year for state fiscal years 1998 and 1999. Program services include helping immigrants with completing naturalization applications, paying application fees, and providing educational services. Since fiscal year 1998, the state reports an average of 1,200 individuals participating in the program each month. In addition, two of the localities we visited—Seattle and San Francisco—also established naturalization programs to assist immigrants, especially those affected by the loss of federal benefits. Though states and localities have naturalization programs, officials administering these programs expressed concern about the length of time it takes to process citizenship applications. In the three cities we visited, immigrants applying for naturalization had to wait up to 3 years before completing the process. According to INS, the average time for processing naturalization applications is more than 2 years nationwide. In some of the nation’s cities with the largest immigrant populations, the waiting time varies: it takes more than a year and a half in New York City, almost 3 years in Los Angeles, and more than 5 years in Miami. In addition, INS reported significant increases in the number of naturalization applications, from 423,000 in 1989 to more than 1.2 million in fiscal year 1996. INS officials cited the benefits that immigrants would gain from their citizenship among the reasons they expect the number of applications to remain high. The eligibility changes under welfare reform for immigrants expanded states’ administrative responsibilities and added financial responsibilities for those states choosing to provide replacement benefits. Due to these changes, the states will be revising procedures and automated systems to meet the new requirements for verifying an immigrant’s eligibility for welfare benefits. Although some states have concerns about correctly implementing these new requirements, federal agencies neither require nor plan special monitoring efforts for determining if the states are correctly determining eligibility. In addition to the challenges all states face, those providing state-funded programs face challenges obtaining future funding and managing the different eligibility rules and funding streams of both federal and state programs. Implementing the new restrictions required the states and localities to educate welfare workers and immigrant recipients about the eligibility changes and to recertify the eligibility of immigrant recipients. Program officials in the states we visited noted that completing the recertifications was time consuming. States’ more recent and future challenges include implementing the new alien status verification requirements—verifying the citizenship or immigration status of applicants for all federal public benefits, implementing the new sponsor deeming requirements, and enforcing affidavits of support for immigrants sponsored by family members. 
Officials in the states we visited anticipated making changes to their automated systems or encountering additional work to implement the new verification procedures or develop separate eligibility determination processes to reflect new distinctions among programs. With the new restrictions, states need more information on alien status for making eligibility determinations. Until INS issues the final regulations, the states can follow the interim INS verification guidelines. States will have 2 years after final regulations are issued to ensure that their verification systems comply with the regulations. According to INS, either proposed or interim regulations will most likely be issued in July 1998. States will face the challenge of modifying their procedures and automated systems for determining citizenship or alien status before making eligibility determinations for federal programs. According to the American Public Welfare Association, the states must modify their software programs to address the differing eligibility criteria under welfare reform. In addition, several officials in the states we reviewed reported that it takes additional steps and time for caseworkers to verify the alien status of immigrants applying for benefits and to determine or recertify their eligibility for federal programs. Officials often noted the potential for confusion in making accurate eligibility decisions, prompting concerns about whether benefits would be provided to those who remain eligible and denied to those who no longer qualify. Although concerns exist about correctly implementing welfare restrictions for immigrants, federal agencies neither require nor plan special monitoring efforts to verify that the states are correctly determining immigrants’ eligibility for benefits. At the time of our review, federal officials for the Medicaid, SSI, and Food Stamp programs told us that errors in providing benefits to ineligible immigrants could be detected in their quality control reviews. HHS officials commented that TANF program rules require no quality control reviews, and the only method they would have for monitoring immigrant restrictions, such as the length of time an individual receives TANF benefits, is through TANF’s annual single state audit. USDA officials reported that several states did not implement the new food stamp restrictions for immigrants by the required date. USDA billed one state for the amount of federal food stamp benefits provided to immigrants after the restrictions were to have been implemented. By January 1998, USDA officials indicated that, as far as they knew, all states had fully implemented the food stamp restrictions for immigrants. Issues that the states will face in the future include implementing the new deeming requirements and enforcing the affidavits of support. At the time of our review, the states we visited were waiting for federal or state guidance on implementing these requirements and were uncertain about how they would enforce the new affidavits of support. Welfare reform allows federal, state, and local agencies to seek reimbursement for benefits provided to sponsored immigrants; however, some officials expressed concern about the possible difficulty of locating sponsors who may have moved without reporting a change of address to INS.
The new affidavits of support have been in use since December 19, 1997, for new immigrants and for those whose alien status is changing on or after that date as, for example, from temporary residency to lawfully admitted for permanent residence. As a result of the welfare reform law, states faced major decisions on whether to provide assistance to immigrants no longer entitled to federal benefits. States that chose to provide state-funded assistance to immigrants face some long-term challenges funding and implementing these programs. Officials in the states we reviewed cautioned us that future funding for new state programs is uncertain. Although currently approved, funding for programs was appropriated for only a limited time—ranging from 1 to 2 years in the states we reviewed—and passed during favorable economic times. In New Jersey, for example, the state-funded food stamp program was funded through June 30, 1998, and the state needs to pass legislation to continue the program. California officials reported that although funding for state-provided medical assistance, food stamps, and TANF is not a pressing issue now, future funding is somewhat uncertain. They said the continuation of these state-funded programs depends on the state’s economy and on legislative decisions. The states we reviewed reported determining and tracking the fiscal claims for state and federal funds in parallel programs as an implementation challenge. Implementing state-funded food stamp programs, for example, requires states to track and report to USDA the separate federal and state food stamp issuances. In addition, some state officials reported that determining eligibility and calculating separate federal and state benefit amounts for “mixed” households—those with members who are citizens and immigrants—is challenging. A mixed household could have a new immigrant mother and a citizen child who are receiving food stamps and cash and medical assistance funded separately by federal and state dollars. Washington state officials noted that to some extent they can calculate separate benefit amounts and funding sources because their new computer system is designed to track this information. California officials reported they would have to reprogram their automated systems to identify and track costs of benefits provided to immigrants through federal and state programs. California counties manually tracked immigrants receiving benefits under certain programs until the programming changes were completed. The welfare reform law represents a significant shift of responsibility for decisions about aiding needy immigrants from the federal government to the states. Federal policy now gives the states much latitude in restricting immigrants’ eligibility for welfare programs. States’ welfare policies vary in their treatment of both pre-reform and new immigrants, according to our review. For many immigrants, the extent of assistance provided will depend on state policies and other assistance available at the local level. For those federal benefits that the states could choose to continue, almost all states did so. For those federal benefits that were terminated, many states chose to provide state-financed benefits for at least some part of this population. Few states, however, completely replaced lost federal benefits for either pre-reform or new immigrants. Some local programs, including food banks, already report an increased need for food assistance due to the welfare reform restrictions for immigrants. 
Our work reviewed the significant changes prompted by welfare reform in its early stages—changes affecting immigrants, including both those immigrating before and after the passage of the law and those considering future immigration. The states are focusing their welfare assistance efforts on immigrants living in the United States before welfare reform and have not yet focused much attention on the possible needs of new immigrants. In addition, the states’ choices about providing additional benefits to immigrants, whether pre-reform or new, were made during favorable economic times and could change during less prosperous times. Furthermore, how federal, state, and local agencies will enforce the new affidavits of support is unknown. In general, it is too soon to measure the long-term impact of welfare reform on immigrants and immigration. In commenting on a draft of this report, HHS took no exception with the report findings, and USDA generally agreed with the findings and observations. Their comments are included in appendixes II and III, respectively. USDA also noted the recent enactment of legislation that restores eligibility for federal food stamp benefits to approximately 250,000 legal immigrants beginning in November 1998, which the report discusses. In addition, USDA stated that it is too early to know the extent to which states operating state-funded food assistance programs will continue their programs for those noncitizens in need of food assistance who remain ineligible for federal benefits. We agree that it is too early to know how the states will respond to this new legislation. HHS and USDA also provided technical comments, which we incorporated as appropriate. We also provided copies of a draft to SSA, INS of the Department of Justice, and the states of California, New Jersey, and Washington. They provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of USDA and HHS and the Commissioners of SSA and INS. We will also make copies available upon request. If you or your staff have any questions about this report, please contact Gale C. Harris, Assistant Director, at (202) 512-7235, or Suzanne Sterling, Senior Evaluator, at (202) 512-3081. Other major contributors to this report are Elizabeth Jones, Deborah Moberly, and Julian Klazkin. This appendix summarizes information on the benefits available to needy immigrants in the locations we visited: San Francisco County, California; Essex County, New Jersey; and Seattle, Washington. The information reflects the states’ actions before enactment of P.L. 105-185 (signed into law in June 1998) that will restore, effective November 1, 1998, federal food stamp eligibility for some pre-reform immigrants. According to INS, as of April 1996 California had about 3.7 million or 35 percent of immigrants living in the United States and ranked as the state with the largest immigrant population. Besides continuing to provide Temporary Assistance for Needy Families (TANF) and Medicaid benefits to immigrants, California is funding a food stamp program for some of those who lost federal benefits. In addition to the state programs available, San Francisco County provides immigrants with food assistance through local food banks, cash benefits through general assistance, and naturalization assistance through community-based organizations. California chose to provide TANF—through the state’s CalWORKS program—to immigrants regardless of their date of entry into the country. 
In May 1997, the immigrant caseload of 199,381 accounted for almost 22.5 percent of California’s total TANF caseload, according to state estimates. At an average grant of $192 a month, it would cost the state over $178,000 a month to provide state-funded cash assistance to the 931 eligible new immigrant families it estimated would enter California between August 22, 1996, and December 31, 1997. In addition to TANF-comparable benefits, California is providing Medicaid or comparable medical assistance—through its Medi-Cal program—to immigrants regardless of their date of entry, which has increased state spending and prompted changes to state and county data systems to track costs. California officials estimate that about 2,797 or 20 percent of new immigrants will apply for Medi-Cal benefits each month. On the basis of this estimate, by the year 2001, an additional 168,000 immigrants would apply for the state-funded Medi-Cal benefits. California does not fund statewide assistance specifically to replace SSI benefits; however, counties must have general assistance programs. These benefits may be available for nondisabled pre-reform immigrants who are not already receiving SSI and for new immigrants who are not eligible for SSI. San Francisco County, for example, provides up to $345 per month in general assistance to immigrants no longer eligible for SSI, an amount lower than the average SSI benefit for immigrants of $532 per month. According to a study done in San Francisco County, for each elderly and disabled immigrant no longer eligible for federal assistance on the basis of immigration status who receives general assistance or some form of local cash assistance, the city and county will incur an additional annual cost of between $4,140 and $7,800 per person. If SSI benefits had not been restored, San Francisco estimated that it would have cost the city and county as much as $31 million to provide general assistance to an estimated 7,500 immigrants during the first fiscal year after the termination of SSI benefits. The state created the California Food Assistance Program for Legal Immigrants to provide food stamps to certain categories of pre-reform immigrants. The state-funded food stamps provide these immigrants with the same amount of benefits as those previously received under the federal program and are available to pre-reform immigrants younger than 18 or aged 65 and over. The program, which is authorized to operate through July 1, 2000, received appropriations of $35.6 million for fiscal year 1998. Begun on September 1, 1997, the program replaces lost federal food stamps for about 56,000 of the 151,700 pre-reform immigrants who lost their federal benefits, according to state estimates. New immigrants are not eligible for state-funded food assistance; however, some local food assistance is available, officials said. Although San Francisco County explored the possibility of providing a food stamp program for those no longer eligible for federal or state food stamps, such as adults under age 65, it has not established such a program. The county, however, provided additional funding of $186,000 to a local food bank to increase purchases and add three or four new distribution centers targeted to reach immigrants no longer eligible for food stamps. These immigrants received notice by mail of the new centers and were told to present their letters at the distribution centers to receive food, which they may claim on a weekly basis.
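The rounded California cost and caseload figures cited in this section follow from simple arithmetic on the inputs quoted above. The sketch below is a consistency check only; the 60-month horizon for the Medi-Cal projection (roughly five years, to 2001) is an illustrative assumption, and all other inputs are the state estimates already cited.

```python
# Spot-check of the rounded California estimates quoted above.
# All inputs are figures cited in the text; the 60-month horizon
# (roughly August 1996 through 2001) is an illustrative assumption.

new_immigrant_families = 931           # eligible new immigrant families, Aug. 1996 to Dec. 1997
average_monthly_grant = 192            # dollars per family per month
monthly_cash_cost = new_immigrant_families * average_monthly_grant
print(f"Monthly TANF-comparable cost: ${monthly_cash_cost:,}")        # 178,752, cited as "over $178,000"

medi_cal_applicants_per_month = 2_797  # state estimate of new-immigrant Medi-Cal applicants per month
assumed_months_to_2001 = 60            # assumption: roughly five years
cumulative_applicants = medi_cal_applicants_per_month * assumed_months_to_2001
print(f"Projected additional Medi-Cal applicants by 2001: {cumulative_applicants:,}")  # 167,820, cited as "about 168,000"
```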
To increase immigrants’ use of the distribution centers, the county and the local food bank are also planning to provide more culturally appropriate foods. California has no statewide naturalization assistance program; however, selected counties and localities in the state provide some assistance. Thirty-five of the state’s 58 counties provide some naturalization assistance. San Francisco County formed the Naturalization Project to provide assistance targeted to the most vulnerable of the immigrant population—those expected to lose SSI before its retention and those scheduled to lose federal food stamp benefits. The goals of the project—comprised of a coalition of city and county government departments, community-based organizations, senior services providers, schools, colleges, private businesses, foundations, and concerned citizens—are to substantially expand service capacity; guarantee responsive, individualized high-quality services; and create a structured network of community services by leveraging all available public, private, and community resources. Funding for this project includes a grant of over $1 million from a private foundation for 1997. According to INS, as of April 1996 New Jersey had approximately 462,000 or over 4 percent of immigrants living in the United States, making it the state with the fifth largest immigrant population. Along with choosing to continue TANF and Medicaid benefits for pre-reform immigrants and to provide these benefits to new immigrants after the federal 5-year bar, New Jersey devised a new state-funded food stamp program to replace lost federal benefits and a statewide naturalization assistance program. In addition to these state-level programs, Essex County provides some food assistance to its immigrants through local food pantries and soup kitchens. New Jersey chose to continue TANF benefits for pre-reform immigrants and to provide these benefits to new immigrants following the federal 5-year bar. The Work First New Jersey program, which is administered at the county level, provides these benefits. New Jersey combined its TANF and general assistance programs in January 1997 to form the Work First New Jersey program. The state provides no state-funded cash assistance to new immigrants during the 5-year federal bar. New Jersey provides Medicaid to pre-reform and new immigrants following the 5-year bar. In addition, the state provides funding for Medicaid-comparable assistance to new immigrants during the 5-year federal bar. The state, however, plans to reduce the medical benefits available to new immigrants to emergency services only. According to New Jersey officials, the state must pass legislation to change the current state law, which requires full medical benefits for all individuals, including immigrants. New Jersey officials also noted that an estimated 2,000 noncitizens no longer eligible for federal Medicaid assistance because they did not meet the new qualifications in the welfare reform law, such as Permanently Residing Under the Color of Law (PRUCOL), were receiving state-funded medical assistance. When the state passes legislation, 1,900 of these individuals’ medical assistance benefits will be reduced to cover only emergency services. In addition to providing Medicaid and state-funded medical assistance, the state funds several hospitals to treat indigent individuals, including immigrants, through New Jersey’s Charity Care program. 
Along with the TANF portion of the Work First New Jersey program discussed, the general assistance portion of the program provides benefits to single adults or childless couples. Certain noncitizens who remain in the country legally, such as PRUCOLs, but no longer meet the eligibility criteria for federal programs may be eligible for general assistance. They may receive benefits until they can apply for naturalization, as well as for an additional 6 months after they apply, which was the time estimated for completing the naturalization process. State officials were unsure, however, whether the 6-month restriction would be enforced because the average naturalization processing time in New Jersey now is much longer than the 6-month estimate. New immigrants are barred from receiving Work First New Jersey benefits during the first 5 years of residency in the country. The benefit level of general assistance provided through Work First New Jersey averages $140 per month for employable individuals and $210 per month for unemployable individuals; both rates are lower than the average monthly SSI benefit of $515.25. New Jersey created the State Food Stamp program in August 1997 to provide benefits for certain categories of pre-reform immigrants who lost their federal food stamps—those younger than 18, aged 65 and over, or who are disabled. This program, which was created by an executive order of the state’s governor, provided $15 million for contracting with USDA to purchase federal food stamp benefits for this population through June 1998. However, as of June 12, 1998, legislation was pending to continue the state-funded food stamp benefits beyond this time. The legislation would also expand eligibility to include those between the ages of 18 and 65 who have at least one child under 18. The program’s eligibility criteria and benefit levels mirror the federal program’s, with the exception of not requiring citizenship. In addition, the program requires participants to apply for citizenship within 60 days of their eligibility to do so. New Jersey officials originally estimated that 17,000 immigrants lost their federal food stamp benefits due to welfare reform changes; however, as of February 1998, officials reported that the program was providing state-funded benefits to about 5,700 immigrants. Although new immigrants are ineligible for state-funded food assistance, all immigrants are eligible to receive food assistance through local food pantries and soup kitchens statewide. New Jersey provided funding for a statewide naturalization assistance program run through a coalition of 31 service providers in the Immigration Policy Network. The program began providing assistance in January 1997 with $2 million in state funds and $2 million in private funds. The project initially targeted those immigrants expected to lose SSI benefits before they were reinstated. Later in the year, the project was expanded with an additional $2 million in public funds and $2 million in private funds to provide assistance to those immigrants scheduled to lose federal food stamps. Services provided through the program include English language and civics classes, legal assistance with applications, and assistance with medical waivers for exemption from citizenship or language testing. As of February 1998, about 4,200 individuals participating in the program had completed naturalization applications. The program is scheduled to continue through December 1998. 
According to INS, as of April 1996 approximately 174,000 or about 2 percent of immigrants in the United States lived in the state of Washington, making it the state with the 10th largest immigrant population. Anticipating the federal restrictions under welfare reform, the governor proposed programs that would treat immigrants in need the same as citizens. Besides continuing to provide TANF and Medicaid benefits for pre-reform immigrants and providing these benefits to new immigrants following the 5-year bar, Washington devised several new state-funded programs to replace lost federal benefits and provides naturalization assistance as well. In addition to these state programs, Seattle created its own naturalization assistance program for immigrants and refugees losing federal and state benefits. Washington chose to continue TANF benefits for pre-reform immigrants and to provide these benefits to new immigrants after the 5-year bar. In November 1997, the state began providing state-funded cash assistance for new immigrants during the federal 5-year bar. Immigrants are eligible to apply for these benefits after living in the state for 12 months. With the exception of not requiring citizenship, the state-funded program applies the same eligibility and deeming rules as the TANF program and offers the same level of benefits. As of February 1998, approximately 230 immigrant families were receiving state-funded cash assistance at a monthly cost to the state of about $74,000. Washington provides Medicaid benefits to pre-reform and new immigrants following the 5-year bar. In August 1997, the state began providing state-funded medical assistance to new immigrants during the federal 5-year bar if they met the requirements to be considered categorically needy. Like the state-funded cash assistance program, the state medical assistance program requires a residency period of 12 months. With the exception of not requiring citizenship, the program applies the same eligibility criteria and deeming rules as the federal program and offers the same level of benefits. As of December 1997, a total of 389 immigrants were participating in the program at a cost to the state of approximately $5,200 for that month. In addition to this state-funded medical assistance, some new immigrants may receive additional state or local medical assistance during their 5-year bar. The types of assistance available include medical care services for incapacitated, aged, blind, or disabled people determined eligible for general assistance; emergency medical services; and services for pregnant women and children not eligible for the state medical assistance program. Washington provides general assistance benefits for some new immigrants who are not eligible for SSI. Immigrants who are 18 and older and incapable of gainful employment for at least 90 days can apply for the state’s General Assistance-Unemployable program. This program provides an average monthly benefit of $339, which is less than the average monthly SSI benefit of $512. In 1997, Washington created the Food Assistance program to provide state-funded food stamp benefits for pre-reform and new immigrants no longer eligible for federal food stamps. At the state’s initiative, Washington was the first of eight states to contract with USDA to purchase federal food stamps. The eligibility criteria and benefit levels mirror the federal program’s, with the exception of not requiring citizenship. The state program began with a budget of $65 million for fiscal years 1998 and 1999.
The state estimated that the program would serve approximately 38,363 immigrants in 1998; however, state officials mentioned that this estimate did not account for those immigrants who became citizens or qualified for federal benefits due to an exception such as being credited with 40 work quarters. As of January 1998, the program was serving about 14,800 immigrants at a cost to the state of approximately $1.7 million for that month. Washington’s naturalization assistance program, which began before welfare reform, targets its assistance to those immigrants expected to lose federal benefits. For fiscal years 1998 and 1999, funding for the program totaled approximately $1.5 million per year. According to state officials, an average of 1,200 immigrants participated in the program each month since July 1997. Washington officials estimate that over 70 percent of the participants complete their classes and file a citizenship application. Services provided through the program include help with completing applications, payment of citizenship application and photograph fees, and training courses to help participants pass citizenship exams. Seattle also provides several services for immigrants through its naturalization program—the New Citizen Initiative. Begun in 1996, the program is administered by the city’s Department of Housing and Human Services in partnership with the Seattle Public Library and a consortium of community-based organizations. The program provides a variety of services for immigrants, including a naturalization information clearinghouse, and prioritizes its services for immigrants who are elderly, disabled, or have inadequate language and literacy skills. The city has funded this initiative with $500,000 for fiscal years 1998 and 1999, and private organizations are providing an additional $200,000 in funding. Program officials estimate that assistance will be provided to between 500 and 800 immigrants during 1998.

Related GAO Products
Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 18, 1998).
Medicaid: Early Implications of Welfare Reform for Beneficiaries and States (GAO/HEHS-98-62, Feb. 24, 1998).
Welfare Reform: State and Local Responses to Restricting Food Stamp Benefits (GAO/RCED-98-41, Dec. 18, 1997).
Illegal Aliens: Extent of Welfare Benefits Received on Behalf of U.S. Citizen Children (GAO/HEHS-98-30, Nov. 19, 1997).
Alien Applications: Processing Differences Exist Among INS Field Units (GAO/GGD-97-47, May 20, 1997).
Food Stamp Program: Characteristics of Households Affected by Limit on the Shelter Deduction (GAO/RCED-97-118, May 14, 1997).
Welfare Reform: Implications of Proposals on Legal Immigrants’ Benefits (GAO/HEHS-95-58, Feb. 2, 1995).
Pursuant to a congressional request, GAO reviewed Title IV of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 and the impact its restrictions would have on immigrant children and their families, focusing on: (1) the options states chose regarding Temporary Assistance for Needy Families (TANF) and Medicaid benefits for immigrants and state-funded assistance available to new immigrants during the 5-year bar; (2) for restricted federal programs, Supplemental Security Income (SSI), and food stamps, the number of immigrants, including children, whose federal benefits have been terminated, and the state-funded assistance available to them; and (3) the major implementation issues and challenges state agencies face in administering the provisions changing welfare assistance to immigrants. GAO noted that: (1) although the states could have dropped immigrants from their welfare rolls, most states have chosen to provide some welfare benefit to part of this population; (2) nearly all states have chosen to continue providing federal TANF and Medicaid benefits to pre-reform immigrants and to provide these benefits to new immigrants after 5 years of U.S. residency; (3) about a third of the states use state funds to provide similar benefits to some new immigrants during the 5-year bar; (4) among these states are 6 of the 10 where most immigrants live--2 states provide state-funded medical assistance and 4 states provide both state-funded cash and medical assistance; (5) with the states' continuation of TANF and Medicaid benefits to pre-reform immigrants and the retention of these immigrants' SSI benefits, the greatest economic impact of welfare reform for most of these immigrants is the loss of federally funded food stamp benefits; (6) after the implementation of the food stamp restrictions, an estimated 940,000 immigrants receiving food stamps in 1997 lost eligibility for receiving them; (7) almost one-fifth of this group consisted of immigrant children; (8) at the time of GAO's review, 14 states had created state-funded food stamp programs serving about a quarter of this immigrant group nationwide--primarily children, the disabled, and the elderly; (9) fewer states, however, offer state-funded food stamps to new immigrants; (10) the most recent legislation will restore food stamp eligibility to an estimated 250,000 immigrants, mostly children, the disabled, and the elderly, the same groups targeted by state-funded food stamp programs; (11) states' responses to the restoring of these benefits, such as changing eligibility for state-funded programs, are unknown at this time; (12) with the implementation of the welfare reform restrictions for immigrants, states and local governments face added responsibilities; (13) states' future challenges include verifying the citizenship or immigration status of applicants for all federal public benefits and enforcing affidavits of support for new immigrants sponsored by relatives; (14) the states GAO visited anticipated major systems changes and other additional work to implement the new verification procedures; (15) furthermore, states choosing to provide assistance to immigrants no longer eligible for federal benefits are uncertain about future funding for these programs; and (16) these states also face additional challenges managing funding streams and determining eligibility for federal and state programs.
The Advisers Act generally defines an investment adviser, with certain exceptions, as any individual or firm that receives compensation for giving advice, making recommendations, issuing reports, or furnishing analyses on securities either directly to investors or through publications. As of July 21, 2011, individuals or firms that meet this definition and that have over $100 million in assets under management generally must register with SEC and are subject to SEC regulation. Advisers with less than $100 million in assets under management may be required to register with and be subject to oversight by one or more state securities regulators. The Advisers Act requires investment advisers to adhere to the high standards of honesty and loyalty expected of a fiduciary and to disclose their background and business practices. Traditionally, private funds (such as hedge and private equity funds) have been structured and operated in a manner that enabled the funds to qualify for an exclusion from some federal statutory restrictions and most SEC regulations that apply to registered investment pools, such as mutual funds. For example, in 2008, we found that private equity and hedge funds typically claimed an exclusion from registration as an investment company. By relying on one of two exclusions under the Investment Company Act of 1940, such funds are not required to register as an investment company. The first exclusion is available to private funds whose securities are owned by 100 or fewer investors. The second exclusion applies to private funds that sell their securities only to highly sophisticated investors. To rely on either exclusion, the private fund must not offer its securities publicly. Before the passage of the Dodd-Frank Act, many advisers to private funds were able to qualify for an exemption from SEC registration. Although certain private fund advisers were exempt from registration, they remained subject to antifraud (including insider trading) provisions of the federal securities laws. The Dodd-Frank Act requires that advisers to certain private funds register with SEC by July 21, 2011. Specifically, Title IV of the Dodd-Frank Act, among other things, amends the Investment Advisers Act by

- eliminating the exemption from SEC registration upon which advisers to private funds have generally relied—thereby generally requiring advisers only to private funds with assets of $150 million or more to register with SEC;
- providing SEC with the authority to require certain advisers to private funds to maintain records and file reports with SEC;
- providing exemptions from registration to advisers solely to venture capital funds, advisers to certain private funds with less than $150 million of assets under management, and certain foreign private advisers;
- authorizing SEC to collect certain systemic-risk data and share this information with the Financial Stability Oversight Council; and
- generally requiring that advisers with assets under management of less than $100 million register with the state in which they have their principal office, if required by the laws of that state.

As shown in figure 1, according to SEC staff, 11,505 investment advisers were registered with SEC as of April 1, 2011, of which the staff estimate 2,761 advise private funds.
Of these 2,761, approximately 863 registered investment advisers report on their disclosure form that their only clients are private funds, and approximately 1,898 advisers report that they advise private funds and other types of clients, such as mutual funds. When the Dodd-Frank Act’s new registration provisions take effect, the composition of registered investment advisers will change. SEC staff estimates that approximately 3,200 advisers currently registered with SEC will fall below the required amount of assets under management for registration with SEC (increasing from $25 million under current law to $100 million under the Dodd-Frank Act amendments). As a result, they will be required to register with one or more state securities authorities instead of SEC—leaving 8,300 advisers registered with SEC. In addition to these advisers, SEC staff also estimates that (1) approximately 750 new investment advisers to private funds will have to register with SEC because of the elimination of the registration exemption on which private fund advisers have typically relied and (2) approximately 700 new investment advisers will register with SEC as a result of growth in the number of investment advisers (based on historical growth rates). Therefore, SEC staff estimates that there will be approximately 9,750 registered investment advisers after the implementation of these Dodd-Frank Act amendments. However, an estimate of the total number of registered investment advisers with private fund clients remains uncertain, because some of the 2,761 currently registered advisers with private fund clients may be required to deregister with SEC, depending on the amount of their assets under management, and some of the newly registering advisers may advise one or more private funds. Although advisers to certain private funds will be required to register with SEC, the private funds themselves may continue to qualify for an exclusion from the definition of an investment company under the Investment Company Act of 1940. Because private funds typically are not required to register as investment companies, SEC exercises limited oversight of these funds. Nonetheless, the Dodd-Frank Act amends the Advisers Act to state that the records and reports of private funds advised by a registered investment adviser are deemed to be the records and reports of the investment adviser. Thus, according to SEC staff, such records and reports are subject to examination by SEC staff. SEC oversees registered investment advisers primarily through its Office of Compliance Inspections and Examinations, Division of Investment Management, and Division of Enforcement. In general, SEC regulates investment advisers to determine whether they (1) provide potential investors with accurate and complete information about their background, experience, and business practices and (2) comply with the federal securities laws and related regulations. More specifically, the Office of Compliance Inspections and Examinations examines investment advisers to evaluate their compliance with federal securities laws, determine whether these firms are fulfilling their fiduciary duty to clients and operating in accordance with disclosures made to investors, and assess the effectiveness of their compliance-control systems.
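These staff estimates fit together with straightforward arithmetic. The sketch below simply restates the rounded figures quoted above as a consistency check; it introduces no data beyond those figures.

```python
# Reconstruction of the SEC staff estimates quoted above, using only the
# rounded figures given in the text.

registered_april_2011 = 11_505     # advisers registered with SEC as of April 1, 2011
private_fund_only = 863            # report that their only clients are private funds
private_fund_plus_other = 1_898    # advise private funds and other client types
assert private_fund_only + private_fund_plus_other == 2_761   # advisers with private fund clients

moving_to_state_oversight = 3_200  # expected to fall below the new $100 million threshold
remaining_with_sec = registered_april_2011 - moving_to_state_oversight   # 8,305, cited as roughly 8,300

newly_registering_private = 750    # private fund advisers losing the former exemption
projected_industry_growth = 700    # new advisers expected based on historical growth rates
projected_total = 8_300 + newly_registering_private + projected_industry_growth
print(remaining_with_sec, projected_total)   # 8305 9750, cited as "approximately 9,750"
```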
The Division of Investment Management administers the securities laws affecting investment advisers and engages in rulemaking for SEC consideration and other policy development intended, among other things, to strengthen SEC’s oversight of investment advisers. The Division of Enforcement investigates and prosecutes violations of securities laws or regulations. Securities SROs include national securities exchanges and securities associations registered with SEC, such as the New York Stock Exchange and FINRA. SROs are primarily responsible for establishing the standards under which their members conduct business; monitoring the way that business is conducted; bringing disciplinary actions against their members for violating applicable federal statutes, SEC rules, and their own rules; and referring potential violations by nonmembers to SEC. SEC oversees SROs, in part by periodically inspecting them and by approving their rule proposals. At the time that the system of self-regulation was created, Congress, regulators, and market participants recognized that this structure possessed inherent conflicts of interest because of the dual role of SROs as both market operators and regulators. Nevertheless, Congress adopted self-regulation of the securities markets to prevent excessive government involvement in market operations, which could hinder competition and market innovation. Congress also concluded that self-regulation with federal oversight would be more efficient and less costly to taxpayers. For similar purposes, Congress created a self-regulatory structure for the futures markets. NFA is a futures SRO registered with CFTC as a national futures association. Section 914 of Title IX of the Dodd-Frank Act required SEC to study the need for enhanced examination and enforcement resources for investment advisers. Among other things, SEC was required to study the number and frequency of examinations of investment advisers by SEC over the last 5 years and the extent to which having Congress authorize SEC to designate one or more SROs to augment SEC’s efforts in overseeing investment advisers would increase the frequency of examinations of investment advisers. In January 2011, SEC staff issued the required report. SEC staff concluded that the number and frequency of examinations of registered investment advisers have declined over the past 6 years and that SEC faces significant capacity challenges in examining these advisers, in part because of the substantial growth of the industry and the limited resources and number of SEC staff. As a result, SEC staff recommended three options to Congress to strengthen SEC’s investment adviser examination program: (1) imposing user fees on advisers to fund SEC examinations, (2) authorizing an SRO to examine all registered investment advisers, and (3) authorizing FINRA to examine its members that are also registered as investment advisers for compliance with the Advisers Act. In its report, SEC staff discusses the trade-offs of each of these options. Regulators, industry representatives, investment advisers, and others we interviewed told us that it was difficult to opine definitively on the feasibility of forming and operating a private fund adviser SRO because of the many unknown factors, such as its specific form, functions, and membership. Nonetheless, the general consensus was that forming a private fund adviser SRO similar to FINRA could be done but not without challenges.
Regulators and industry representatives pointed to the creation and existence of other securities SROs as evidence that forming an SRO to oversee private fund advisers is feasible. However, SEC staff and two securities law experts told us that legislation would be needed to allow a private fund adviser SRO to be formed under the federal securities laws. Moreover, regulators, industry representatives, and others identified a number of challenges to forming a private fund adviser SRO, some of which were similar to the challenges involved in creating other SROs, such as FINRA and NFA. According to SEC staff and two securities law experts, legislation would be needed to allow for the formation of a private fund adviser SRO under the federal securities laws. Neither the Advisers Act nor the other federal securities laws expressly authorize the registration of a private fund adviser SRO. As a result, SEC staff and these experts told us that Congress would need to enact legislation to allow for such an SRO to register with SEC and for SEC to delegate any regulatory authority to the SRO. Past proposals to create an SRO to oversee investment advisers were also predicated on legislation. For example, the House of Representatives passed a bill in 1993 that, among other things, would have amended the Advisers Act to authorize the creation of an “inspection-only” SRO for investment advisers. Congress has taken different approaches in creating different types of SROs and has granted the SROs different authorities. For example, it passed the Maloney Act in 1938, which amended the Securities Exchange Act of 1934 to provide for the registration of national securities associations as SROs for the over-the-counter securities market. This provision led to the registration of the NASD, which later merged with parts of the New York Stock Exchange to become FINRA. National securities associations have broad regulatory authorities, including rulemaking, examination, and enforcement authority. In contrast, Congress in 1975 provided for SEC to establish the Municipal Securities Rulemaking Board—an SRO charged only with issuing rules for the municipal securities industry. More recently, Congress created the Public Company Accounting Oversight Board to oversee the auditors of public companies in the Sarbanes-Oxley Act of 2002. Like FINRA, the Public Company Accounting Oversight Board has broad regulatory authorities, but unlike FINRA, its board is selected by SEC, and its budget, although established by the board, is subject to SEC approval. Previously introduced legislation authorizing the registration of an SRO for investment advisers has ranged from an SRO with potentially broad regulatory authorities similar to those of FINRA to an SRO empowered only to inspect registered investment advisers for compliance with the applicable securities laws. Representatives from all of the investment funds and adviser associations we spoke with opposed forming a private fund adviser SRO, indicating that their members would not voluntarily form or join one. In addition, officials from NASAA and some industry representatives also told us that no basis exists for forming an SRO to oversee private fund advisers. According to NASAA officials, the requirement under the Dodd-Frank Act for certain private fund advisers to register with SEC obviates the need for an SRO for these advisers because SEC and state securities regulators are in the best position to oversee them. 
Furthermore, representatives from two industry associations told us that the nature of private equity funds and investors obviates the need for an SRO. For example, representatives from one industry association said that the terms of a private equity fund typically are negotiated between an adviser and institutional investors, providing the investors and their lawyers with the opportunity to include any protections they deem necessary. These views suggest that the feasibility of a private fund adviser SRO may depend, in part, on whether legislation authorizing such an SRO made membership mandatory for registered investment advisers to private funds. Similarly, in its section 914 study, SEC staff noted that for an investment adviser SRO to be successful, membership would need to be mandatory to ensure that all investment advisers would be subject to SRO examination. For similar purposes, the federal securities and commodities laws require broker-dealers and futures commission merchants dealing with the public to be members of a securities or futures SRO, respectively. Regulators, industry associations, and others told us that forming and operating an SRO to oversee private fund advisers would face a number of challenges. One of the principal challenges would be funding the SRO’s start-up costs. None of the regulators or associations could provide us with an estimate of the start-up costs in light of the many unknown variables, including the SRO’s number of members and regulatory functions. For example, advisers with only private fund clients could be the only advisers required to be members of the SRO. Alternatively, other advisers could also be required to be members, such as advisers with both private fund and other types of clients or advisers managing a certain minimum amount of private fund assets. However, representatives from two industry associations told us that the cost of forming a new SRO would be considerable and that it would exceed the cost of providing resources to SEC to conduct additional examinations of investment advisers to private funds. Data from two of the more recently created SROs show that their start-up costs varied considerably. According to the Public Company Accounting Oversight Board’s 2003 Annual Report, the board’s start-up costs were about $20 million. In contrast, NFA officials told us they used around $250,000 to fund NFA’s start-up in the early 1980s. Another challenge that a private fund adviser SRO could face is establishing and reaching agreement on matters involving the SRO’s organization, including its fee and governance structures. In particular, representatives from industry associations told us that the concentration of assets under management in a small number of large firms may make reaching an agreement on how to assess fees difficult. For example, representatives from one industry association said this condition could present challenges in formulating a fee structure that does not impose too much of a financial burden on smaller advisers or allocate an inequitable share of the fees to the largest advisers. In addition, if the SRO were modeled after FINRA or NFA, it would need to create, among other things, a board of directors to administer its affairs and represent its members. Private fund advisers differ in terms of their business models, investment strategies, and amounts of assets under management.
According to several industry associations and firms, such diversity means that each group’s interests may differ, making it difficult to reach key agreements. For example, industry associations said that, among other things, the diversity of the industry with respect to investment strategies and assets under management may make reaching agreement on the allocation of board seats a challenge. More specifically, one industry association stated that the larger firms, if required to pay a large portion of the SRO’s costs, may also want, or develop, greater influence over the SRO’s activities. Furthermore, CFTC staff told us that reaching agreements could be complicated by the competitiveness of private fund advisers with each other and their general unwillingness to share their data with each other. According to officials from NFA, which today has a membership of about 4,000 firms and six different membership categories, it took nearly 7 years for the various parties to reach all of the necessary agreements. A private fund adviser SRO may also face challenges in developing, adopting, and enforcing member compliance with its rules, if given rulemaking authority similar to that of FINRA. According to SEC staff and industry representatives, FINRA, like other SROs, traditionally has taken a rules-based approach to regulating its members—adopting prescriptive rules to govern member conduct, particularly interactions between member broker-dealers. Representatives from one industry association told us that SROs traditionally use a rules-based approach, in part, to address the inherent conflicts of interest that exist when an industry regulates itself by minimizing the degree of judgment an SRO needs to use when enforcing its rules, thereby serving to enhance the credibility of self-regulation. In contrast, SEC staff and industry representatives told us that the regulatory regime for investment advisers is primarily principles-based, focusing on the fiduciary duty that advisers owe to their clients. The fiduciary duty has been interpreted through, among other things, case law and enforcement actions (and not defined by rules), and depends on the facts and circumstances of specific situations. According to SEC staff and industry representatives, adopting detailed or prescriptive rules to capture every fact and circumstance possible under the fiduciary duty would be difficult. Further, NASAA officials and industry representatives stated that attempting this approach could result in loopholes that would weaken the broad protections investors are currently afforded. Moreover, SEC staff and some industry representatives told us that the diversity among the different advisers would also make it difficult to adopt a single set of rules for all advisers. For example, SEC staff stated that because of the complex nature of hedge funds (such as their changing investment strategies), regulations will need to be constantly monitored for effectiveness and updated as needed; and as such, it may not be feasible to adopt detailed or prescriptive rules. Like private fund advisers, SROs, and other financial industry regulators, a private fund adviser SRO could face a challenge in attracting, hiring, and retaining qualified personnel. According to industry representatives, no organization other than SEC has experience and expertise regulating investment advisers.
Private fund advisers told us that an SRO would have to compete with private fund advisers and other financial services firms for the limited number of individuals with the skills needed to establish or assess compliance with federal securities laws. For example, as registered investment advisers, private fund advisers may need to hire staff, including a chief compliance officer, to comply with SEC regulations requiring advisers to have effective policies and procedures for complying with the Advisers Act. According to two industry participants, the Dodd- Frank Act will likely further increase the need for individuals with these skills at various types of financial services firms as more entities are brought under regulation and additional requirements are placed on regulated firms. In addition to private entities, an SRO would be competing with SEC for these individuals. For example, SEC has estimated that it will need to hire about 800 staff over the next several years—contingent on its budget requests—to help implement its regulatory responsibilities under the Dodd-Frank Act. Some of the challenges of forming a private fund adviser SRO may be mitigated if the SRO were formed by an existing SRO, such as FINRA, but other challenges could remain. Representatives from FINRA, NFA, and an industry association told us that an existing SRO may have access to internal funds to help finance the start-up costs of a private fund adviser SRO. An existing SRO also may have in place the necessary offices and other infrastructure. Finally, FINRA officials said that an existing SRO may be able to leverage some of its staff and staff development programs. At the same time, however, a few of the representatives from industry associations we spoke with said that even an existing SRO would face start-up challenges. They told us that an existing SRO would still face the challenges of hiring new staff or training existing staff to examine advisers for compliance with the Advisers Act, given that no SRO currently has such responsibility and skills. Moreover, they said that an existing SRO would also face challenges reaching agreement on, among other things, the SRO’s governance structures. Under Title IV of the Dodd-Frank Act, SEC is required to assume oversight responsibility for certain investment advisers to private funds. According to SEC staff, the agency plans to examine registered private fund advisers through its investment adviser examination program, as it has done in the past, and has taken steps to handle the increased number of examinations of such advisers. These steps include providing training on hedge and private equity funds, identifying staff with private fund experience or knowledge, prioritizing the hiring of candidates with private fund experience, and bringing in outside experts to educate staff about private fund operations. However, SEC staff’s section 914 study reported that without a stable and scalable source of funding that could be adjusted to accommodate growth in the industry, SEC likely will not have sufficient capacity in either the near or long term to effectively examine registered investment advisers with adequate frequency. We have also previously found that SEC’s examination resources generally have not kept pace with increases in workload, which have resulted in substantial delays in regulatory and oversight processes. 
In addition, we have previously reported that, in light of limited resources, SEC has shifted resources away from routine examinations to examinations of those advisers deemed to be of higher risk for compliance issues. One trade-off to this approach we identified was that it may limit SEC’s capacity to examine funds considered lower risk within a 10-year period. According to securities regulators and industry representatives, a private fund adviser SRO could offer a number of advantages and disadvantages. A private fund adviser SRO could offer the advantage of helping augment SEC’s oversight of registered private fund advisers and address SEC’s examination capacity challenges. Through its membership fees, an SRO could have scalable and stable resources for funding oversight of its member investment advisers. As noted by SEC staff in its section 914 study, an SRO could use those resources to conduct earlier examinations of newly registered investment advisers and more frequent examinations of other registered investment advisers than SEC could do with its current funding levels. As evidence of this possibility, SEC staff cited FINRA’s and NFA’s abilities to examine a considerably larger percentage of their registrants in the past 2 years compared with those of SEC. In addition, an SEC commissioner stated that an SRO would have the necessary resources to develop and employ technology to strengthen the examination program, provide the examination program with increased flexibility to address emerging risks associated with advisers, and direct staffing and strategic responses that may help address critical areas or issues. While a private fund adviser SRO could help augment SEC’s oversight, its creation would involve trade-offs in comparison to direct SEC oversight. Many of the advantages and disadvantages of a private fund adviser SRO are similar to those of any type of SRO, which have been documented by us, SEC, and others. Advantages of a private fund adviser SRO include its potential to (1) free a portion of SEC’s staff and resources for other purposes by giving the SRO primary examination and other oversight responsibilities for advisers that manage private funds, (2) impose higher standards of conduct and ethical behavior on its members than are required by law or regulations, and (3) provide greater industry expertise and knowledge than SEC, given the industry’s participation in the SRO. For example, according to FINRA officials, the association, as an SRO, is able to raise the standard of conduct in the industry by imposing ethical requirements beyond those that the law has established or can establish. In doing so, FINRA can address dishonest and unfair practices that might not be illegal but, nonetheless, undermine investor confidence and compromise the efficient operation of free and open markets. Some of the disadvantages of a private fund adviser SRO include its potential to (1) increase the overall cost of regulation by adding another layer of oversight; (2) create conflicts of interest, in part because of the possibility for self-regulation to favor the interests of the industry over the interests of investors and the public; and (3) limit transparency and accountability, as the SRO would be accountable primarily to its members rather than to Congress or the public. For example, an SRO would have primary oversight for its members, but SEC currently conducts oversight examinations of a select number of FINRA members each year to assess the quality of FINRA’s examinations.
Although these examinations serve an oversight function, we previously have found that they expose firms to duplicative examinations and costs. SEC staff told us that estimating the extent to which, if any, a private fund adviser SRO would reduce the agency’s resources burden is difficult, given the hypothetical nature of such an SRO. Nonetheless, available information suggests that a private fund adviser SRO may free little, if any, SEC staff and resources for other purposes. Although SEC does not collect specific data on the number of investment advisers that have private fund clients, as discussed earlier, its staff estimate that 2,761 of the 11,505 registered investment advisers (as of April 1, 2011) report having private funds as one or more of its types of clients. If, for example, a private fund adviser SRO were limited to those advisers with only private fund clients and were to have primary responsibility for examining its members, it could relieve SEC from having to examine approximately 863 advisers. However, SEC still would have oversight responsibility for over 10,600 registered investment advisers that do not solely advise private funds. As a result, SEC may need to maintain much, if not most, of the resources it currently uses to oversee investment advisers because it would have oversight responsibility for the majority of the registered investment advisers, as well as the private fund adviser SRO. In contrast to a private fund adviser SRO, a broader investment adviser SRO could have primary responsibility for examining all of the 11,505 registered investment advisers, including private fund advisers, and thus reduce SEC’s resource burden by a greater extent. A private fund adviser SRO could also create regulatory gaps in the oversight of registered investment advisers. Representatives from an investment adviser firm told us that it is common for advisers with a large amount of assets under management to manage portfolios for institutional clients, mutual funds, and private funds. The investment personnel and support functions often overlap, and a single portfolio management team often manages all three types of client portfolios. According to securities regulators, industry representatives, and others, if a private fund adviser SRO’s jurisdiction was limited to only an adviser’s private fund activities, the SRO would not be able to oversee and understand the full scope of activities of advisers with private fund and other clients. For example, representatives from an industry association told us that advisers typically maintain policies and procedures to allocate grouped trades (such as shares of an initial public offering) fairly among clients and avoid providing preferential treatment to a fund that pays performance fees at the expense of a fund that does not. An SRO with jurisdiction over only an adviser’s private fund activities might not be able to detect trade allocation abuses involving an adviser’s private fund and other clients. In such a case, SEC would be responsible for detecting such abuse and, therefore, may need to examine an investment adviser’s relationship with its private fund clients—which could duplicate the SRO’s efforts. In addition, a private fund adviser SRO could create conflicting or inconsistent interpretations of regulations. The formation of a private fund adviser SRO would result in the SRO overseeing investment advisers to private funds and SEC overseeing all other investment advisers. 
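To make the workload arithmetic in this discussion concrete, the following is a minimal sketch based only on the estimates cited above. It is an illustration, not an SEC or GAO analysis, and the variable names are ours.

```python
# Illustrative restatement of the adviser-count estimates cited above
# (registrations as of April 1, 2011); variable names are ours, not SEC's.
registered_advisers = 11_505   # all SEC-registered investment advisers
with_private_funds = 2_761     # report private funds among their types of clients
private_funds_only = 863       # report private funds as their only clients

print(f"Advisers reporting any private fund clients: {with_private_funds:,}")

# If SRO membership were limited to advisers with only private fund clients,
# SEC would still examine every other registered adviser.
remaining_with_sec = registered_advisers - private_funds_only
print(f"Advisers SEC would still examine under a narrow SRO: {remaining_with_sec:,}")  # over 10,600

# A broader investment adviser SRO could take primary examination responsibility
# for all registered advisers, although SEC would still oversee the SRO itself.
print(f"Advisers a broad investment adviser SRO could examine: {registered_advisers:,}")
```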
A securities regulator, industry representatives, and others told us that through examinations or enforcement actions, a private fund adviser SRO could interpret a regulation one way for its members, but SEC could interpret the same regulation another way for advisers that are not members of the SRO. Furthermore, for advisers with both private fund and other clients, if the SRO’s jurisdiction were limited to an adviser’s private fund activities, the opportunity would exist for the SRO to interpret a regulation one way for the adviser with respect to its private fund clients and for SEC to interpret the regulation a different way for the same adviser with respect to its other clients. Representatives from an industry association commented that SEC would have to spend significant amounts of time ensuring that the SRO and SEC staffs are applying the rules consistently among similar situations and circumstances, which would include writing guidance on interpretations beyond what is normally done. Finally, a private fund adviser SRO could result in duplicative examinations of investment advisers. As discussed earlier, many advisers with large portfolios manage assets for multiple types of clients, such as private and mutual funds, and have certain functions that serve all of their clients. According to securities regulators and industry representatives, for such advisers, their shared functions could be examined by both SEC and a private fund adviser SRO, if the SRO’s jurisdiction was limited to an adviser’s private fund activities. For example, the SRO could examine an adviser to ensure that it complied with its trade allocation policies and procedures for trades executed on behalf of its private funds, and SEC could examine the same policies and procedures to ensure that the adviser complied with them for trades executed on behalf of the adviser’s other clients. These advisers could then be reexamined through SEC’s oversight examinations. As required by the Dodd-Frank Act, SEC is taking steps to assume responsibility for registering and overseeing certain investment advisers to private funds. However, in its section 914 study, SEC staff concluded that the agency likely will not have sufficient capacity to effectively examine registered investment advisers, including private fund advisers, with adequate frequency. A private fund adviser SRO is one of several options that could be implemented to help address SEC’s examination capacity challenges. However, doing so would involve trade-offs, including lessening SEC’s capacity challenges versus increasing potential regulatory gaps, inconsistencies, and duplication in the oversight of registered investment advisers. As recommended by SEC staff in its recent study, other options to address SEC’s capacity challenges include creating an SRO to examine all registered investment advisers or imposing user fees on advisers to fund SEC examinations. Like the private fund adviser SRO option, these two options would involve trade- offs that would have to be considered. We provided a draft of this report to SEC. SEC staff provided technical comments, which we incorporated, as appropriate. We are sending copies of this report to SEC, interested congressional committees and members, and others. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. The objectives of this report were to examine (1) the feasibility of forming and operating a private fund adviser self-regulatory organization (SRO), including the actions that would need to be taken and challenges that would need to be addressed, and (2) the potential advantages and disadvantages of a private fund adviser SRO. Although the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) directs us to assess the feasibility of forming an SRO for private funds, our study focuses on an SRO for private fund advisers. As discussed with congressional staff, the term “private funds,” as used in section 416 of the Dodd-Frank Act, was intended to refer to private fund advisers. The Dodd-Frank Act amends the federal securities laws to require certain advisers to private funds, not the funds themselves, to register with the Securities and Exchange Commission (SEC). Securities SROs serve to help enforce the federal securities laws applicable to their members. An SRO for private funds (not advisers) would not serve that purpose, because private funds could continue to qualify for exclusions from registering with SEC and thus would not generally be subject to the federal securities laws. To focus our discussions with regulators, industry associations, and observers on the feasibility, associated challenges, and advantages and disadvantages of a private fund adviser SRO, we generally predicated our discussions on the assumption that such an SRO would be similar in form and function to the Financial Industry Regulatory Authority (FINRA). To address both objectives, we analyzed the Securities Exchange Act of 1934, Sarbanes-Oxley Act of 2002, and Commodity Exchange Act to identify characteristics of the various types of existing SROs, including their registration requirements, regulatory functions, and oversight framework. In addition, we reviewed past regulatory and legislative proposals for creating an SRO to oversee investment advisers or funds, relevant academic studies, SEC staff’s Study on Enhancing Investment Adviser Examinations (as mandated under section 914 of the Dodd-Frank Act) (section 914 study), and related material to gain insights on the potential form and functions of a private fund adviser SRO. We did not evaluate the findings of the study or the staff’s conclusions regarding the investment advisers examination program. We also reviewed letters received by SEC in connection with its section 914 study, comment letters on past proposals for an investment adviser or fund SRO, and other material to document the potential challenges in—and advantages and disadvantages of—creating a private fund adviser SRO. We obtained information on the number of registered investment advisers from SEC based on information in the Investment Adviser Registration Depository, as of April 1, 2011. Using this database, SEC provided us estimates of the number of advisers with only private fund clients and the number of advisers with private fund and other types of clients. SEC staff derived these estimates based on information from Form ADV—the uniform form that is used by investment advisers to register with SEC, which requires information about, among other things, the investment adviser’s business and clients. 
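The estimation approach SEC staff used, detailed in the next paragraph, amounts to a simple screen on two Form ADV items. The sketch below is our hypothetical illustration only: the records and field names are invented, and treating a 100 percent response on Item 5.D(6) as "private fund clients only" is a simplifying assumption made for the example.

```python
# Hypothetical illustration of the Form ADV screen described in the next paragraph.
# Records and field names are invented; this is not SEC data or SEC code.
filings = [
    {"adviser": "Adviser A", "item_7b": "yes", "item_5d6_pct": 100},  # only pooled-vehicle clients
    {"adviser": "Adviser B", "item_7b": "yes", "item_5d6_pct": 40},   # pooled vehicles plus other clients
    {"adviser": "Adviser C", "item_7b": "no",  "item_5d6_pct": 0},    # no private fund clients
]

def advises_private_funds(filing):
    # Screen used in the estimate: Item 7.B answered "yes" and Item 5.D(6) not 0 percent.
    return filing["item_7b"] == "yes" and filing["item_5d6_pct"] > 0

private_fund_advisers = [f for f in filings if advises_private_funds(f)]
# Simplifying assumption for illustration: 100 percent on Item 5.D(6) stands in
# for "private funds are the adviser's only clients."
private_fund_only = [f for f in private_fund_advisers if f["item_5d6_pct"] == 100]

print(len(private_fund_advisers), len(private_fund_only))
# Applied to registrations as of April 1, 2011, SEC staff's screen yielded an
# estimated 2,761 advisers with private fund clients, about 863 of which
# reported private funds as their only clients.
```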
Form ADV does not currently include a specific question on whether the adviser is an adviser to private funds. To estimate the number of advisers that potentially advise private funds, SEC includes the number of advisers whose response to Form ADV’s Item 7.B equaled “yes” and Item 5.D(6) is not 0 percent. Item 7.B asks the investment adviser whether it or any related person is a general partner in an investment-related limited partnership or manager of an investment-related limited liability company, or whether it advises any other “private fund,” as defined under SEC rule 203(b)(3)-1. Item 5.D(6) asks the adviser to identify whether it has other pooled investment vehicles (e.g., hedge funds) as clients and if so to indicate the approximate percentage that these clients comprise of its total number of clients. Although we were able to replicate these estimates using these procedures, we attribute them to SEC. We found these figures to be sufficiently reliable for the purposes of showing estimated numbers of registered investment advisers serving private fund clients. We interviewed regulators, including SEC, the Commodity Futures Trading Commission, FINRA, and the National Futures Association. We also interviewed representatives from the following 10 relevant industry associations representing investment advisers and private or other types of funds. Representatives of 17 advisory firms and/or investors in private funds who were members of some of these associations also participated in the interviews.
• Alternative Investment Management Association
• Association of Institutional Investors
• Coalition of Private Investment Companies
• North American Securities Administrators Association
• Private Equity Growth Capital Council.
To gather a diverse set of perspectives, we identified industry associations representing various types of investment funds, advisers, and investors in private funds by reviewing letters received by SEC in connection with its section 914 study and previous concept releases about an investment adviser SRO. We also drew upon our institutional knowledge. In addition, we interviewed market observers, including a compliance consulting firm that provides these services to the financial services industry, two law professors who have written papers on the potential use of an SRO to oversee investment companies, and one law professor whose paper focused on an SRO for hedge funds. We conducted this performance audit from August 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Richard Tsuhara, Assistant Director; Rudy Chatlos; Matthew Keeler; Marc Molino; Josephine Perez; Robert Pollard; Linda Rego; and Jennifer Schwartz made major contributions to this report.
Over the past decade, hedge funds, private equity funds, and other private funds proliferated but were largely unregulated, causing members of Congress and Securities and Exchange Commission (SEC) staff to raise questions about investor protection and systemic risk. To address this potential regulatory gap, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) brought certain advisers to private funds under the federal securities laws, requiring them to register with SEC. The Dodd-Frank Act also requires GAO to examine the feasibility of forming a self-regulatory organization (SRO) to provide primary oversight of private fund advisers. This report discusses (1) the feasibility of forming such an SRO, and (2) the potential advantages and disadvantages of a private fund adviser SRO. To address the mandate, GAO reviewed federal securities laws, SEC staff's recently completed study on its investment adviser examination program that was mandated by the Dodd-Frank Act, past regulatory and legislative proposals to create an SRO for investment advisers, and associated comment letters. GAO also interviewed SEC and SRO staffs, other regulators, and various market participants and observers. We provided a draft of this report to SEC for review and comment. SEC staff provided technical comments, which we incorporated, as appropriate. Regulators, industry representatives, investment advisers, and others told GAO that it was difficult to opine definitively on the feasibility of a private fund adviser SRO, given its unknown form, functions, and membership. Nonetheless, the general consensus was that forming a private fund adviser SRO could be done, as evidenced by the creation and existence of other SROs. At the same time, they said that the formation of a private fund adviser SRO would require legislation and would not be without challenges. SEC staff and securities law experts said that the federal securities laws currently do not allow for the registration of a private fund adviser SRO with SEC. In addition, regulators, industry representatives, and others told GAO that forming such an SRO could face challenges, including raising the necessary start-up capital and reaching agreements on its fee and governance structures. Some of the identified challenges are similar to those that existing securities SROs had to confront during their creation. Creating a private fund adviser SRO would involve advantages and disadvantages. SEC will assume responsibility for overseeing additional investment advisers to certain private funds on July 21, 2011. It plans to oversee these advisers primarily through its investment adviser examination program. However, SEC likely will not have sufficient capacity to effectively examine registered investment advisers with adequate frequency without additional resources, according to a recent SEC staff report. A private fund adviser SRO could supplement SEC's oversight of investment advisers and help address SEC's capacity challenges. However, such an SRO would oversee only a fraction of all registered investment advisers. Specifically, SEC would need to maintain the staff and resources necessary to examine the majority of investment advisers that do not advise private funds and to oversee the private fund adviser SRO, among other things. Furthermore, by fragmenting regulation between advisers that advise private funds and those that do not, a private fund adviser SRO could lead to regulatory gaps, duplication, and inconsistencies.
The Bureau’s testing program for the 2010 Census relied principally on a small number of large-scale census-like tests. Specifically, the 2010 testing program included two national tests on the content of questionnaires in 2003 and 2005, two site tests focused on data- collection methods and systems in 2004 and 2006, and a final “dress rehearsal” at two sites in 2008. The dress rehearsal, considered to be the final step of a decade of research and testing, had the primary focus of testing automated field operations and their interfaces. The Bureau previously reported that implementing the census tests, including the dress rehearsal, cost about $108 million. As part of the Bureau’s effort to conduct the 2020 Census at a cost lower than the 2010 Census, it plans to invest in early research and conduct smaller more frequent tests to inform its 2020 Census design decisions. The lifecycle for 2020 Census preparation is divided into five phases, as illustrated in figure 1. The Bureau is attempting to frontload critical research and testing to an earlier part of the decade than it had in prior decennials. It intends to use the early research and testing phase through fiscal year 2015 to develop a preliminary design and to evaluate the possible impact that changes would have on the census’ cost and quality. By the end of the early research and testing phase, the Bureau plans to decide on preliminary operational designs. In August 2012, as part of the 2020 Census testing program, the Bureau issued a research and testing management plan. The plan defines eight phases of the life cycle for a census field test, as shown in figure 2. According to the plan, the first three phases culminate in the approval of a field test design by a group of senior Bureau managers that provides decision-making support to the 2020 Census program. To facilitate the field test design process, the Bureau developed templates as guidance for developing test designs. The Bureau also developed management plans in specific functional areas. One, a communications and stakeholder engagement plan, identified stakeholder groups involved in 2020 Census planning. Another, a governance plan, identified decision-making bodies for the 2020 Research and Testing program. The plans outline processes the Bureau will implement as it prepares for the 2020 Census. Prior to this, the Bureau used 2010 Census program guidance and standards to govern some of its earliest design discussions. The Bureau plans to conduct 10 field tests in preparation for the 2020 Census during the early research and testing phase. According to the Bureau, not all field tests are alike, and they vary in scope and capability. Some will be designed to encompass more exploratory questions. Others will be designed to more rigorously test the implementation of specific operations. When we initiated this review in early 2013, the Bureau had designed its initial three tests as summarized in table 1. Other planned field tests will cover topics such as building the address list and narrowing possible approaches for self-response, non-response follow-up, and workload management. The field tests will culminate in a larger test late in fiscal year 2015 to further narrow possible design options. Based on our prior work, we identified 25 key practices for a sound study plan. Following these practices before test designs are completed can help ensure that test designs are appropriate, feasible, and produce useful results. We organized the 25 practices into the following six themes. 
• general research design,
• data collection plan,
• data analysis plan,
• sample and survey,
• stakeholders and resources, and
• design process management.
As demonstrated in figure 3, the Bureau generally followed most of the 25 key practices for two of the three field test designs and at least partially for the third field test design. Research questions frame the scope of a test, drive the design, and help ensure that findings are useful and answer the research objectives. The objectives should be relevant, creating a clear line of sight to the Bureau’s goals for the 2020 Census. In addition, clearly articulating the test design in advance of conducting a test aids researchers in discussing methodological choices with stakeholders. Across the three field tests, the Bureau generally followed three of the general research design practices, followed one practice to a varying degree, but did not follow the practice of identifying potential biases (see table 2). The Bureau defined concepts, considered relevant prior research in each test, and included objectives that were relevant. However, the Bureau omitted research questions from the design of the 2012 National Census Test (2012 NCT). Identifying specific research questions linked to the research objectives helps ensure that answers resulting from the field test will address the needs of the 2020 Census research program reflected in the research objectives. For example, the 2013 Quality Control Test (2013 QCT) design includes an objective to investigate how the Bureau can modernize and increase the efficiency and utility of its field infrastructure. The corresponding research question states that the test will research the feasibility of using Global Positioning System (GPS) data. The test would determine, among other things, if field staff appropriately visited housing units for address listing and enumeration and if GPS data can be used to reduce or eliminate field quality control checks. None of the three test designs addressed potential biases, such as cultural bias. If a test design does not address potential biases, systematic errors could be introduced. Such errors could affect the accuracy of the test and thus potential design decisions for the 2020 Census. Identifying data sources and data collection procedures is key to obtaining relevant and credible information. The Bureau generally followed two of the data collection practices for all three tests and followed the others to varying degrees (see table 3). In its initial three field test designs, the Bureau generally followed the practices of clearly presenting how data will be collected, and describing a method to encourage responses, as applicable. The three other practices were followed less consistently. These practices help researchers ensure that the data collected for a test will be sufficient and appropriate. First, only the 2013 National Census Contact Test (2013 NCCT) design included a plan for administering and monitoring data collection. The test designs should include data collection procedures that will obtain relevant and credible information to ensure that the resulting information meets the decision maker’s needs. In its design documents, Bureau officials explained they would collect data using telephone interviews from Census Bureau contact centers, outbound interviewing, and telephone questionnaire assistance. The Bureau further explained that survey data and the information on the outcomes of calls would be provided to the survey team.
Second, the Bureau discussed the level of difficulty in two of the test designs, but did not explain why it would be difficult to obtain the data. Bureau officials stated that with limited resources and based on the importance of the objectives of a given test, they will not be able to apply all the practices equally to every test design. For example, the 2013 QCT relied on tests of software by Bureau staff— not a traditional field test involving contact with households. So, although the Bureau identified that it may be difficult to reliably identify deviations in procedures using GPS, it did not include an explanation or mitigation of the possible difficulty. Third, the Bureau only identified factors that may interfere with data collection in the 2013 NCCT design. Pre-specifying a data analysis plan as part of a test design can help researchers select the most appropriate data to measure and the most accurate and reliable ways to collect them. Although the Bureau generally followed two of the four practices related to a data analysis plan in the test designs that we reviewed, its discussion of the possible limitations of findings or test results varied (see table 4). For all of the test designs, the Bureau generally identified a basis for comparing the test results and included a proposed design or research plan that was directly related to the objectives and/or questions. In addition, analytical techniques were proposed for two of three of the test designs. While in the 2012 NCT design the Bureau documented that the information from re-interviewing respondents will be used to validate their initial responses, officials did not discuss how they would match this information. Lastly, only the 2013 NCCT design included a discussion of possible limitations. For example, in its design the Bureau noted that one of the data sources used in the test might not be representative. Discussing possible limitations is important so that it is clear what the test design can and cannot address, and so that test results are not overly generalized. A survey with an appropriate sample design can provide descriptive information about a population and its subgroups, as well as information about relationships among variables being measured. In addition, researchers should consult prior relevant research and test any new questions so that the survey questions will elicit appropriate information from respondents to address the Bureau’s data needs. For all three tests, the Bureau generally followed two of the sample and survey practices, while use of the third practice varied across the field tests (see table 5). Across the test designs, the Bureau included both a discussion of how to reach the intended sample and the status of the survey instrument or questionnaire, as applicable. While all of the test designs included a rationale for the type of sample, the 2013 NCCT design also included a rationale for the size of its sample. Bureau documents showed this sample size was selected due to the absence of any documentation of a prior study and the test team’s conservative estimation of the response rate. Further, the Bureau noted the estimated response rate and selected sample size needed to enable the team to determine the quality and comprehensiveness of the data in its analysis. Test designs that explain how their sampling methodology will yield information of sufficient quality for its intended purpose provide a better justification for their cost. 
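To illustrate the kind of sample-size rationale discussed above, a standard calculation links a target precision and an assumed response rate to the number of cases to select. The sketch below is generic: the proportion, margin of error, confidence level, and response rate are placeholder values, not figures from the 2013 NCCT design or any other Bureau document.

```python
import math

def cases_to_select(p=0.5, margin=0.03, z=1.96, response_rate=0.6):
    # Completed cases needed to estimate a proportion at the given margin of
    # error and confidence level (z = 1.96 for 95 percent confidence), then
    # inflated by an assumed response rate to get the number of cases to field.
    completes = (z ** 2) * p * (1 - p) / margin ** 2
    selected = completes / response_rate
    return math.ceil(completes), math.ceil(selected)

completes, selected = cases_to_select()
print(f"Completed interviews needed: {completes}; cases to select: {selected}")
# With these placeholder values: 1,068 completes and roughly 1,779 cases selected.
```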
Managing stakeholders, identifying team member responsibilities, and identifying resources are key to a test’s success, as people are the primary resource of a high-performing organization. For the 2013 test designs, the Bureau followed all of the stakeholder and resource practices. The 2012 test design did not follow three of the four practices (see table 6). The Bureau included a timeline and required resources in each field test design, including how much each field test would cost. The Bureau also generally followed the other practices related to stakeholders and resources for two of the test designs. However, for the 2012 NCT, officials did not identify stakeholders, their respective roles in the test, or their involvement in developing the test design. The latter two tests were designed with management plans for communication, stakeholder engagement, and governance, which, for example, state that stakeholders’ roles should be defined and that their feedback should be gathered. The 2013 Quality Control Test design documented the role that a stakeholder had in outlining how the Bureau can increase data accuracy. By including these practices in guidance, the Bureau has better ensured that its people and resources are being effectively and efficiently leveraged during the development of future 2020 Census tests. Good management of the design process can help managers identify factors that can affect test quality, such as potential risk of delays and challenges to the lines of communication. Across the three tests, the Bureau’s governance process for developing test designs varied in following the four practices (see table 7). First, the Bureau identified clear reporting relationships for only the 2013 QCT. It partially followed this practice for the 2013 NCCT design, and did not follow it for the 2012 NCT design. For the two latter tests, the Bureau utilized membership lists and responsibilities matrices to identify test and project teams, and the assigned tasks and deliverables. Second, the Bureau identified review and approval roles for two of the test designs, but not for the 2012 NCT. For example, for the 2013 QCT design, the Bureau identified which individuals were supposed to review and approve certain design documents. When authority is clearly assigned and communicated, individuals can be held responsible accordingly. Third, the Bureau’s documentation of performance measures and timelines with associated milestones for all three test designs did not identify how the measures would be monitored. For example, for the 2013 QCT design the Bureau included a list of deliverables with associated dates, such as sending an initial study plan to senior Bureau officials for their review. However, it did not indicate how the Bureau would know whether these deliverables were implemented by the indicated dates. Measuring and monitoring performance allows Bureau managers to track progress toward their goals and have crucial information on which to base their organizational and management decisions. Fourth, the Bureau did not follow its guidance for approving its 2013 NCCT test design and only partially followed it for the 2013 QCT design. According to the Bureau’s August 2012 research and testing management plan, the test designs should be approved at four different stages. The test design phase is complete after the fourth approval. The Bureau partially followed this practice for the 2013 QCT by documenting approval of its design at only one of the stages.
According to Bureau meeting records, senior Bureau officials discussed the 2013 NCCT design after its implementation. Further, Bureau records indicate that senior Bureau officials discussed the 2012 NCT design, before the August 2012 management plan was issued, but did not document approval. In July 2013, the Bureau began using a table that includes test- design approval dates. This practice helps ensure that management’s approval of a plan maintains its relevance and value to management in control over operations. Further, documenting that management has approved a design provides accountability and offers transparency as to when decisions were made. The Bureau’s design templates outline the information that should be included in two of its key design documents, the field test overview and the field test plan. The templates list topics to be discussed in the overview and plan, and, in some cases, provide examples of what staff should include for a topic. We found that the templates did not address some of the practices we identified for a sound study plan. For example, the templates did not require a test design to include (1) discussion of potential biases, (2) identification of factors that could interfere with obtaining data, (3) identification of difficulties in collecting data, and (4) specification of stakeholder’s roles. In response to this audit, the Bureau subsequently revised its field test template to include these four practices as topics to be discussed. As the Bureau works to develop field tests to inform decisions about the 2020 Census, Bureau officials are learning lessons that can strengthen the design of future tests. According to Bureau officials, our audit helped to reinforce the Bureau’s need to draw on early lessons learned from initial tests. These lessons were derived from examining where the Bureau did not follow best practices for study designs and identifying corrective actions. The Bureau has adopted some of these test design lessons and is taking steps to adopt others. Table 8 lists six lessons learned from the initial three field tests. One lesson the Bureau identified is the importance of obtaining buy-in from management early in the test development process. While designing the three initial tests, Bureau field test designers did not brief senior Bureau management on the development of the designs or involve them in the planning or review of data collection methods. In addition, managers of various Bureau divisions responsible for methodology and other subject matter areas requested to be involved in the process earlier. According to Bureau officials, without early involvement, it may be difficult to obtain upper management approval of test designs quickly, which can lead to unexpected late changes or delays in testing. Early managerial involvement can help ensure early agreement on goals, objectives, timing, and capabilities needed to support a test. This lesson complements the practices of identifying stakeholders benefiting from the field test as well as stakeholders involved in the preparation of the design. To involve management earlier, Bureau officials began briefing upper management about the planning of tests during other regularly scheduled agency-wide executive meetings early in the test planning stages. Officials also started conducting one-day planning sessions beginning with tests planned for fiscal year 2014. 
Since beginning these sessions, Bureau officials said they have improved at communicating input from external experts at the National Academy of Sciences to upper Bureau management. Further, officials said they have found the sessions to be effective in identifying issues early. Bureau officials said they now intend to hold these planning sessions for each test. In January 2013, the Bureau began convening a Bureau-wide test strategy review body. This panel of experts first met after the 2013 National Census Contact Test was implemented. In February 2013, Bureau officials decided that prior to conducting future tests, design teams would present the plans, sample design, and objectives to the panel. According to the Bureau, the panel will now look at the Bureau’s research strategy and goals, design decisions, and how the field tests will affect design decisions in fiscal years 2014 and 2015, and clarify operational milestones. The first pre-implementation presentation was conducted in February 2013 for the 2013 Quality Control Test. This allowed the test team to clarify the 2013 QCT’s purpose and verify its testing methodology with Bureau-wide experts. Bureau research managers believe the test’s design is better because of these meetings, and expect future test designs to benefit similarly. During our review, we discussed with Bureau officials whether the Bureau took steps to evaluate the test development process after the three initial tests. The officials told us they recently started conducting staff reviews of tests they have implemented. Such post-test reviews allow the Bureau the opportunity to identify any further lessons learned from developing tests to improve either the design or management of remaining tests for the 2020 Census. The Bureau conducted its first post-test review following the 2013 National Census Contact Test. The review documented, for example, that involving stakeholders, such as methodologists, in test planning and identifying their roles and responsibilities helps improve communication during the design process. Further, the review documented that test designs should not only identify responsible parties, but also have information on what deliverables are expected of these parties. In addition, the Bureau also conducted a review of the 2013 Quality Control Test. Going forward, these reviews will provide the Bureau with additional opportunities to build its knowledge base on conducting small, targeted field tests. The Bureau has taken steps to improve how it monitors the status of field test design deliverables. Bureau officials said that they previously reported on the status of some test deliverables in a biweekly report. However, these reports did not track the status of all deliverables across all the tests. As a result, senior decennial managers had to contact individual test team leaders to obtain the status for each of the initial test designs. Bureau officials acknowledged that our review led them to realize that with additional field test designs being created, monitoring across all field tests would improve their test status reporting process, and increase their efficiency in collating status information for managers. In July 2013, a Bureau official informed us that they began using a new tracking sheet to monitor the progress of field test deliverables. The new tracking tool provides a more comprehensive and global perspective on the status of deliverables across all tests.
Bureau officials described this as an evolving process and said that they plan to take additional steps to develop a process for monitoring the status of field tests as well. The Bureau has also recognized the importance of keeping team leaders informed about key design elements. According to the Bureau, design teams are required to submit certain documents for field test design reviews and approvals. Testing guidance is available electronically. Newly assigned team leaders are individually emailed links to baseline documents. New team leaders are also provided a team leader handbook. However, the handbook does not identify which documents are required for field test design development, nor does it indicate which documents were required for submission for test design reviews or approvals. Without having a listing of required documents, the Bureau risks duplicating its efforts to keep team leaders informed of key design elements. To ensure that team leaders are consistently informed of field test development guidance and the documents that should be prepared to support test design reviews and approvals, Bureau officials said they plan to include a listing of such documents in the team leader handbook. Bureau officials acknowledged that our work offered a way of improving how some information is disseminated to team leaders. The effort to revise the team leader handbook is in progress, but Bureau officials could not provide a timeline for completing it. Achieving a consistent understanding among team leaders of documents required for field test design approvals could help reduce possible delays in the test design review process. In response to this audit and as part of its effort to adapt its management structures to oversee multiple census field tests being developed concurrently, Bureau officials say that they are realigning field test governance processes to improve communication and accountability. Further, they said that the Bureau has already taken steps to identify one point of contact for each future test. Previously, a field test coordinator had to track input from various project team leaders involved with a particular census field test. This lesson complements the practice of identifying clear internal reporting relationships, including who reports to whom and points of contact for different types of information for a sound research plan. In addition to identifying reporting relationships, the Bureau acknowledged that taking steps like establishing one point of contact will help it to more effectively maintain clear lines of communication, and establish accountability when it develops a field test. While the Bureau has taken some initial steps to implement its proposed restructuring, such as conducting a field test management group meeting to further integrate the 2020 field tests across projects, it has not formalized other proposed field test management restructuring and guidance revisions. For example, to improve the coordination of field test planning, the Bureau has proposed creating a field test management team that would provide centralized coordination and streamline the test processes. Further, Bureau officials said that the research and testing plan is under review and being updated to reflect the current process for approving field test designs and plans. Internal controls require that agencies complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention. 
The controls also require management to periodically evaluate the organizational structure and make changes as necessary in response to changing conditions. However, the proposed restructuring and guidance revisions have not yet been formalized. The Bureau has many other competing priorities that may need attention more urgently, and officials could not provide us with a timeline or milestones for formalizing the changes. Meanwhile, the Bureau is continuing work on the design of tests. Without a timeline and milestones for this restructuring, the Bureau risks uncoordinated management of its field tests. This puts the effectiveness and efficiency of the tests at risk, including the possibility of overlap and duplication. Lastly, it is important for the Bureau to document the lessons it has learned from designing its initial tests. In conducting post-test reviews, the Bureau documented some lessons learned from the 2013 National Census Contact Test and 2013 Quality Control Test. However, Bureau officials acknowledged that the Bureau did not consistently document the lessons learned from the design phases for the initial tests. Internal controls require that managers use control activities, such as the production of records or documentation, to aid in managing risk. Documenting these lessons can help reduce the risk of repeating prior deficiencies or inconsistencies that may lead to test development delays. Given the long time frames involved in planning the census, documentation is essential to ensure lessons are incorporated into future tests. In its effort to design smaller and more targeted tests for the 2020 Census, the Bureau has taken important steps that could make its testing strategy more effective. The Bureau's investments in early testing are intended to validate the use of strategies and innovations geared toward reducing cost. The three test designs we reviewed generally or partially followed most key practices for designing a sound research plan. The Bureau has already begun taking corrective actions on the practices it did not fully follow, in part by adding requirements for designs to its standard guidance. Finalizing planned revisions that focus on field test management in the team leader handbook can help improve how team leaders learn about test design elements. Formalizing its proposed field test management restructuring and guidance revisions will enable the Bureau to ensure that there is improved accountability, communication, and monitoring of its test management design processes. Further, documenting lessons learned while designing the initial field tests can increase the Bureau's ability to take advantage of prior experience. By ensuring these practices are consistently used in field tests, the Bureau will increase the soundness of the tests in areas such as design management and stakeholder involvement. This, in turn, will enhance the likelihood that the Bureau achieves its goal of conducting a cost-effective census. We recommend that the Secretary of Commerce require the Under Secretary for Economic Affairs, who oversees the Economics and Statistics Administration, as well as the Director of the U.S. Census Bureau, to take the following three steps to improve the Bureau's process of designing its census field tests for the 2020 Census: Finalize planned revisions that focus on field test management in the team leader handbook. Set a timeline and milestones for formalizing proposed field test management restructuring and guidance revisions. Document lessons learned from designing initial field tests. 
We provided a draft of this report to the Department of Commerce for comment. In its written comments, reproduced in appendix II, the Department of Commerce concurred with our recommendations. The Department of Commerce also provided minor technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Commerce, the Under Secretary for Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objectives of our review were to determine to what extent the Bureau followed key practices for a sound study plan in designing the earliest 2020 Decennial Census field tests, and to identify any lessons learned from the design process that may help improve future tests. To identify key practices for a sound study plan, we reviewed program evaluation literature, including our design evaluation guide, our 2004 review of Census Bureau overseas field tests, our 2012 review of the planning of the 2020 Census, and our guide to internal controls. We selected 25 key practices from these sources. We shared these practices with the Bureau, and it agreed that they were reasonable. Using these criteria, we evaluated whether the Bureau's three initial field test designs followed key practices for a sound research plan. Additionally, since the program evaluation literature noted the importance of program management to developing a sound study plan, we interviewed Bureau officials on the management processes used for developing the designs. We did not evaluate the outcomes of the field tests. To determine to what extent the designs for the initial 2020 Decennial Census field tests were consistent with key practices for a sound study plan, we reviewed Bureau design documents and interviewed Bureau officials about the field test design process. We compared each of the 25 practices to the Bureau's field test design documents for the three initial tests to answer the question of whether the respective practice was followed. Our determinations provide a measure of the general rigor of the test designs, although they do not recognize the extent to which the Bureau may have considered the key practices later in the life cycle of the designs. After comparing documents provided by the Bureau for each field test to the key practices and determining the extent to which each practice was followed for each test, we verified the determinations by having different auditors independently determine the extent to which practices were followed for 25 percent of each other's initial determinations for each test. We rated each practice as being either "generally followed," "partially followed," or "not followed." We also discussed our preliminary findings with Bureau officials to learn of additional context or documents that we might have missed. Table 9 describes how we made our determinations. 
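To illustrate how determinations on a three-level scale and a 25-percent independent verification sample could be tabulated, the sketch below is provided for illustration only; the practice names, ratings, and routine shown are hypothetical and are not drawn from our workpapers or from any tool we used.

```python
# Illustrative only: the practice names, ratings, and verification routine
# below are hypothetical and are not drawn from GAO's workpapers.
import random

RATINGS = ("generally followed", "partially followed", "not followed")

def verify_sample(initial: dict[str, str], second: dict[str, str],
                  sample_share: float = 0.25, seed: int = 1) -> float:
    """Compare a random share of the initial determinations against a second
    auditor's ratings and return the rate of agreement on that sample."""
    assert all(r in RATINGS for r in list(initial.values()) + list(second.values()))
    rng = random.Random(seed)
    practices = sorted(initial)
    sample_size = max(1, round(sample_share * len(practices)))
    sample = rng.sample(practices, sample_size)
    agreements = sum(initial[p] == second[p] for p in sample)
    return agreements / len(sample)

# Hypothetical determinations for three of the 25 key practices.
initial_ratings = {
    "Identify clear reporting relationships": "partially followed",
    "Define measurable test objectives": "generally followed",
    "Document stakeholder roles and responsibilities": "not followed",
}
second_ratings = dict(initial_ratings)  # here the second auditor agrees on every practice
print(f"Agreement on sampled determinations: {verify_sample(initial_ratings, second_ratings):.0%}")
```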
For each test, we limited our scope to the "design" of the tests, which comprises the first three of the eight phases of the field test life cycle: initiation; concept development; and planning, analysis, and design. Our reviews of the test designs were snapshots "as of" the approval of the designs by senior management, or at an equivalent stage of their life cycle, and were intended to benchmark or baseline the preparation of future test designs. As such, our determination that a given test design did not follow a given key practice does not mean that the Bureau did not consider that key practice later in the test's life cycle. To identify lessons learned from how the tests were designed, we examined where the Bureau had not followed key practices and identified corrective actions needed. We determined the extent to which the key practice criteria were followed and then considered whether there was a pattern or an underlying cause, such as a lack of guidance. We then discussed with Bureau officials what lessons they had learned and what lessons they could implement for future field tests. We conducted this performance audit from January 2013 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Ty Mitchell, Assistant Director; Maya Chakko; Robert Gebhart; Ellen Grady; Wilfred Holloway; Andrea Levine; Donna Miller; and Aku Pappoe made key contributions to this report.
The Bureau is continuing its early testing efforts to prepare for the decennial census. These tests must be well designed to produce useful information about how to implement the 2020 Census. The Bureau has completed the designs of three field tests. GAO was asked to monitor the Bureau's testing for the 2020 Census. This report (1) determines the extent to which the Bureau followed key practices for a sound study plan in designing the earliest 2020 Decennial Census field tests, and (2) identifies what lessons were learned that may help improve future tests. To meet these objectives, GAO first selected 25 key practices for a sound research plan after reviewing its program evaluation literature. GAO then compared Bureau field test design documents for its three initial tests to these practices. GAO also examined where the Bureau had not followed key practices, identified actions needed to address them, and interviewed officials about lessons learned. The Census Bureau (Bureau) generally followed most key practices for a sound study plan in designing the three initial field tests. However, some practices were only partially followed. For example, the extent to which the test designs followed four practices related to design process management varied. Good management of the design process can help managers identify factors that can affect the quality of a test, such as the potential for delays and challenges to lines of communication. For example, the Bureau generally followed one of the practices for design process management (identifying clear reporting relationships) for only one of the test designs. The Bureau partially followed this practice for another test design, and did not follow it for the third. The Bureau has already begun incorporating lessons learned from its initial field test designs. These lessons include obtaining internal expert review and conducting reviews after each test to learn additional lessons. The Bureau has also recognized the importance of keeping design team leaders informed about key design elements. Yet the Bureau has not finalized planned revisions to the team leader handbook, which could help implement this lesson. Additionally, the Bureau is realigning field test governance structures to improve communication and accountability. It has already taken such steps as identifying one point of contact for each test. However, GAO found that the Bureau needs to set timelines and milestones to formalize other restructuring proposals for managing field tests, such as creating a field test management team. Having a formalized proposal and guidance revisions will better position the Bureau to improve accountability, communication, and the monitoring of its test design processes. While lessons the Bureau identified should help it better design future field tests, it has not consistently documented these lessons learned. Documenting lessons can help reduce the risk of repeating prior deficiencies that may lead to test development delays, and can reinforce lessons learned. Given the long time frames involved in planning the census, documentation is essential to ensure lessons are incorporated into future tests. GAO recommends that the Secretary of Commerce (1) finalize field test management revisions in the team leader handbook, (2) set a timeline and milestones for formalizing proposed field test management restructuring and guidance revisions, and (3) document lessons learned from designing initial field tests. 
The Department of Commerce concurred with GAO's findings and recommendations, and provided minor technical comments, which were included in the final report.
The Health Care Financing Administration (HCFA)—an agency of the Department of Health and Human Services—administers the Medicare home health care program. That program has been part of Medicare since Medicare began in 1965 and serves as an alternative to lengthy in-patient hospitalization. Medicare home health costs grew an average of about 33 percent per year from 1989 to 1996—from about $2 billion to almost $18 billion. This occurred primarily because the number of beneficiaries receiving services increased, as did the number of services per beneficiary. A fiscal intermediary under contract to HCFA determines whether a home health agency's services are reasonable and necessary and, in turn, which agency costs are reimbursable based on Medicare cost reimbursement principles. These principles authorize Medicare intermediaries to reimburse home health care providers their reasonable costs of serving beneficiaries when those claimed costs are found to be necessary, proper, actual, and related to patient care. In this regard, providers certify that they are familiar with the laws and regulations regarding the provision of health care services and that the services identified were provided in compliance with such laws and regulations. Mid-Delta Home Health is one of the largest home health care providers in Mississippi. It is owned and operated by Clara T. Reed, who is its Chief Executive Officer and Chief Financial Officer. At the time of our investigation, Mid-Delta Home Health employed over 600 people and consisted of two corporations (in Belzoni and Charleston, Mississippi) that provided home health care through 16 offices in different parts of the state. Medicare reimbursement to Mid-Delta for home health care and rural health clinic services from January 1993 to December 1996 totaled approximately $77.9 million. Mrs. Reed owned and/or controlled a number of related companies and organizations, including P&T Management, Inc., which provided overall management services for Mid-Delta Home Health and its affiliates (rural health care clinics known as Taylor's Medical Clinics); Mid-Delta Development League, Inc.—a nonprofit, tax-exempt (Internal Revenue Code section 501(c)(3)) organization; and The Care Associates, Inc., a political action committee formed to aid political candidates interested in "the health and welfare of" the poor and needy. See figure 1. Mid-Delta Home Health, in our opinion, violated Medicare cost reimbursement principles in claiming costs that it had not incurred. First, Mid-Delta Home Health presented approximately $226,000 in checks to its employees, representing payment for unused leave time in the 1993-96 period. Mrs. Reed subsequently asked the employees to endorse the checks and give them back to Mid-Delta. When questioned about this, some current and former employees told us that they had felt coerced into giving back the checks. The company then improperly claimed the full amounts of the leave as part of the employees' payroll costs and was reimbursed by Medicare. Second, Mrs. Reed requested—or, again according to some employees, coerced—Mid-Delta and P&T Management employees to return a certain amount (about 20 percent or more) of their 1996 bonuses to the company. Those on a "special employee" list received larger bonuses by agreeing in advance to return certain amounts of their bonuses (an average of 29 percent) to the company. The bonus paybacks totaled about $170,000, including $80,000 from Mrs. Reed. 
Mid-Delta improperly claimed, and received reimbursement from Medicare for, the returned bonuses. Mrs. Reed told the employees that the returned unused leave and bonus moneys would support, among other things, an "indigent care fund" for Mid-Delta's home health care patients who had exhausted their Medicare and Medicaid visits. However, according to Mid-Delta's controller, the moneys were used largely to offset unpaid bills of private-pay patients of the affiliated Taylor's Medical Clinics. We determined that Mrs. Reed deposited moneys to P&T Management's operating account or to the account of a political action committee that she controlled. See figure 2. According to Mrs. Reed, in 1994, after consulting with legal and tax advisors, she discontinued allowing her employees to roll over unused leave from one year to the next. Thus, as a company practice, employees were given checks for the cash value of their unused leave, then were asked to endorse and return them to the company. Further, former and current employees whom we interviewed complained that between 1993 and 1996, employees had been presented with unsigned (nonnegotiable) checks in payment for their unused leave time and were asked—some employees said coerced—to endorse the checks back to the company. Some also complained that in 1993 and 1994, Mrs. Reed had issued stock certificates instead of paying them for unused leave time. At Mrs. Reed's request, according to employees we interviewed, employees endorsed the back of their checks and returned them. Mid-Delta Home Health officials deposited most of the checks in an account for the indigent care fund and some to the bank account of a political action committee, both controlled by Mrs. Reed. (See fig. 2.) For the 1993-96 period, records show that Mid-Delta employees returned approximately $226,000 in payment for unused leave. Some of the moneys from the account for the indigent care fund were subsequently deposited to P&T Management's operating account; and Mid-Delta's Director of Finance confirmed that Medicare had reimbursed the amount claimed for employee payroll costs, including the unused leave. Current and former employees told us that in some instances employees who refused to return payments for leave in the face of what they termed coercion faced retaliatory measures, such as demotion or firing. Indeed, two former employees who had been fired from Mid-Delta Home Health believed that they had been fired because they had not returned payments for leave as requested. Mrs. Reed denied this allegation. However, 20 of the 29 employees we interviewed about unsigned leave checks stated that they had endorsed the checks and returned them because they feared losing their jobs if they did not. In some cases in 1993 and 1994, Mrs. Reed gave employees a stock certificate representing an IOU for the monetary value of the checks they had endorsed and returned to the company. She told those employees, according to her statement to us, that she would remember that they had leave coming from the previous year and that they could take a day or so when they needed it. Some former employees complained to us that they had never been paid for their unused leave. When we asked Mrs. Reed about the unsigned checks, she said that she could not cover the employees' leave checks without causing a cash flow problem. She said that if she had presented signed checks to the employees, they would have cashed them instead of returning them to the company. Mrs. 
Reed stated that no one was coerced—the employees voluntarily returned money to the company. Mid-Delta Home Health paid bonuses to its employees based on various criteria, such as length of employment and annual salary. However, according to some Mid-Delta employees, a bonus's amount was also determined by the employee's willingness to return about 20 percent or more of the bonus to the company. Further, Mid-Delta then claimed, and received Medicare reimbursement for, the full amount of the bonuses. (See fig. 2.) Sources informed us that when bonus checks were distributed to employees, Mrs. Reed essentially coerced employees into paying back approximately 20 percent or more of their bonuses. Although several employees told us they had returned their bonuses voluntarily and that they had not felt threatened or coerced, other employees stated that they had complied with Mrs. Reed's requests for fear of losing their jobs. Mid-Delta and P&T Management employees in December 1996 received over $933,000 in bonuses and returned about $170,000 to the indigent care fund. (See fig. 2.) At least $155,000 was then transferred from that fund to the P&T Management operating account. The $170,000 included $80,000 that Mrs. Reed returned from a $125,000 bonus she had received in December 1996. Further, according to one knowledgeable employee, Mrs. Reed had a list of "special employees" who received larger bonuses than did others if they agreed in advance to give back a certain amount. The source explained that Mrs. Reed talked to each employee on the list personally; and as each employee agreed to return the set amount to the company, she initialed next to the employee's name on the list. Indeed, according to one employee, Mrs. Reed said, "I will give you a larger bonus if you agree to give some of it back." Further, another employee told us that when she did not return the bonus money immediately, she received a telephone call from Mrs. Reed asking, "Where's my money?" When the employee answered that she had thought the donation was voluntary, Mrs. Reed responded, "That was never your money in the first place. I want my money." The employee told us that when she returned her bonus in the form of four checks, asking (for personal financial reasons) that each be deposited at a later date, Mrs. Reed deposited all of them immediately. Other employees confirmed similar experiences. Our review of a "special employees" list, containing 38 employees' names, showed that 35 had returned an average of 29 percent of their original bonuses and that the returns from these employees ranged from 18 to 57 percent. We verified with Mid-Delta's Director of Finance that the employees had returned the amounts and that Mid-Delta had claimed the full bonus amounts to Medicare for reimbursement. Table 1 lists details of bonus paybacks by some of the 35 employees. In contrast, although Mrs. Reed paid back part of her $125,000 bonus, her family members did not pay back any of their bonuses. In December 1996, Mrs. Reed's husband received a $75,000 bonus and returned none; their daughter received a $55,000 bonus and returned none. Although Medicare allows a provider to pay reasonable bonuses, Mid-Delta Home Health's Medicare intermediary was unaware that Mid-Delta employees were returning a portion of their bonus money to the company. The intermediary stated that Mid-Delta claims for the payroll-cost amounts were improper if Mid-Delta had received back part of the employees' salaries. 
The intermediary also informed us that intermediaries look at an entire employee compensation package to determine whether the costs claimed are reasonable and that it had not conducted a detailed audit of any Mid-Delta cost report. Moreover, cost reports, which home health agencies submit to their intermediary for Medicare reimbursement, do not break down employees' total compensation by such components as base salary, bonuses, and leave. Therefore, the amounts claimed are not likely to be questioned without an audit. It is our opinion that Mid-Delta Home Health's claims for Medicare reimbursement of the returned leave moneys were also not proper because Mid-Delta had not incurred the costs. In a similar case involving an unrelated home health agency, HCFA formally ruled that "contributions" returned to the provider in the form of deductions from employees' salaries had reduced the provider's costs and therefore had been improperly claimed for Medicare reimbursement. Following the provider's appeal, the U.S. District Court for the Southern District of Mississippi upheld HCFA's decision, concluding that, under Medicare regulations, the contributions qualified as refunds of salary, thus reducing the company's salary expense. The court also noted that Medicare reimbursement was limited to costs incurred. The court in this case further determined that (1) the employee contributions created at least "a perception of impropriety" and (2) the home health agency had no safeguards in place to ensure that coercion was not involved. According to a former Mid-Delta Home Health management official and other former and current employees, Mrs. Reed told employees that their returned funds would support, in part, a Mid-Delta "indigent care fund." Employees who complied with the bonus payback returned the money through personal checks or money orders made payable to the indigent care fund. Further, as previously stated, Mid-Delta Home Health officials deposited most of the employees' returned unused-leave checks to the fund. Mrs. Reed told us that this fund was to assist in continuing the care of home health patients who needed it but who were no longer eligible for Medicare or Medicaid visits. However, one former Mid-Delta nurse told us that she was not paid at all for indigent-patient visits, much less from the indigent care fund. She questioned where the fund's money was going if it was not used to pay for charity visits to indigent home health care patients. When we questioned Mrs. Reed about this, she responded that she tells the nurses, "If I don't get paid, you don't get paid." Indeed, Mid-Delta's controller told us that the indigent care fund was used to offset unpaid bills of patients of the company's rural health clinics, Taylor's Medical Clinics. In support of this statement, the controller provided us with records showing that approximately $418,000 in patients' unpaid balances had been attributed to the "indigent pay" category for the 1994-96 period. Mrs. Reed told us, however, that she would transfer money from the indigent care fund account to the P&T Management operating account to alleviate cash flow problems or to cover payroll costs. Our review of the "indigent pay" category records showed that the unpaid bills belonged mostly to private-pay patients of Taylor's Medical Clinics. 
Mid-Delta’s controller stated that the clinics’ charges were too high for most self-pay and private insurance patients whose insurance companies reimbursed the clinics only for “reasonable and customary charges.” She further stated that the fund was used to cover instances in which such patients did not pay the clinics’ full charges. We noted that among the patients listed in the records were several Mid-Delta employees; Mrs. Reed’s granddaughter; and Mrs. Reed’s daughter, who was Executive Vice President for Operations of P&T Management. Additional Mid-Delta Home Health payroll-cost issues resulted in either improper or questionable claims to and reimbursement by Medicare: Mid-Delta improperly claimed Medicare reimbursement for the total 7-month salary that Mrs. Reed’s daughter received while she attended school full-time and worked part-time. We question Mid-Delta’s (1) claiming $65,000 in bonuses to the daughter, which equated to about 119 percent of the daughter’s base salary and (2) claiming the payroll costs of “Community Education” staff who were marketing Mid-Delta and other affiliated operations. Finally, Mid-Delta purchased an employee’s business in part through a salary bonus to the employee that was later improperly claimed as a payroll cost and reimbursed as such by Medicare. Mrs. Reed’s daughter, Ms. Pamela Redd, attended nursing school full-time at a local community college from June to December 1996. At the same time, she held the job title of Executive Vice President for Operations at P&T Management, Inc. and received a full-time 1996 salary of approximately $54,660. An analysis of Ms. Redd’s employment time-and-attendance sheets showed that 53 percent of her 8-hour work day (from June to December 1996) was spent at school and related activities. Yet, according to Mid-Delta’s Director of Finance, Ms. Redd’s full-time salary was charged to Medicare for reimbursement. This was, in our opinion, an improper claim. According to the intermediary, Mid-Delta should not have been reimbursed for salary—approximately $16,900 by our calculation—incurred while Ms. Redd attended school. According to Mrs. Reed and Ms. Redd, Ms. Redd was not the only employee attending school full-time; however, Ms. Redd was the only employee being paid a full-time salary for the time spent in school. We learned that in addition to her approximately $54,660 base salary, Ms. Redd received two bonuses totaling $65,000 in 1996, equal to approximately 119 percent of her base salary. This was reflected in Ms. Redd’s 1996 W-2 form, which showed that she had been paid almost $122,000. When we asked Ms. Redd about the amount of the bonuses in relation to her base salary, she did not explain why she had received the large bonuses. However, the Mid-Delta controller stated that in addition to using various company criteria (e.g., length of employment and annual salary), Mrs. Reed determined bonus amounts largely at her discretion. According to Mid-Delta’s Director of Finance, Ms. Redd’s payroll costs, including the bonuses, were claimed to Medicare for reimbursement. In our opinion and that of the intermediary, Mid-Delta’s claim to Medicare for Ms. Redd’s 1996 bonuses was questionable because of the disparity between her base salary and the bonus amounts and because she was not working full-time in 1996. Under Medicare cost reimbursement principles, all payments to providers of services must be based on the reasonable cost of services covered under Medicare and related to patients’ care. 
Although Medicare reimbursement is available for expenses associated with educating the community on home health care, it is not available for the expenses of promoting and marketing home health care services in order to increase patient utilization of a provider's facility. We noted the unavailability of Medicare reimbursement for marketing activities for this purpose in our 1995 report regarding another home health care agency. In its review of Mid-Delta's 1993 and 1994 cost reports, the intermediary noted that it had disallowed various expenses, in part, because they were related to marketing functions. These disallowed expenses included the purchase of, among other things, radio and television advertisements; 1,100 fund-raising cookbooks; and an exhibit booth to recruit staff at a physicians' convention. However, according to company records and knowledgeable former P&T Management employees, Community Education staff primarily promoted and marketed Mid-Delta Home Health and Taylor's Medical Clinic services to other providers and the public. Mid-Delta's Director of Finance also confirmed that Medicare reimbursed the salaries of the Community Education employees. Further, according to Community Education staff, Mrs. Reed changed receipts and documents for marketing-related activities to reflect that the activities were associated with Community Education and were therefore Medicare-reimbursable. For example, in December 1995, Mrs. Reed told staff to purchase about $4,000 in Christmas gifts for physicians. When an employee noted "Gift items for referral sources" on the receipt, Mrs. Reed changed the receipt to show that the gifts were for employees, which could be Medicare-reimbursable. We question the propriety of Mid-Delta's submitting payroll and other costs related to marketing activities for Medicare reimbursement because the costs involved marketing and promoting the company. Minutes from staff and other meetings in December 1996, January 1997, and May 1997 noted that the Community Education staff continued to market Mid-Delta services to schools, nursing homes, and hospitals. For example, December 1996 minutes noted that staff had met with a physician "about referring patients to the agency" who had diabetes and that cards had been placed in waiting rooms "of physicians who indicated that they would refer patients to us." January 1997 minutes noted that the Community Education staff had "sold contracts to nursing homes and other providers; . . . marketed psych services to physicians in Yazoo City area. . . ." Minutes from May 1997 stated that by operating booths at various outside meetings, Community Education staff were "promoting the Center for Specialized Diabetic Foot Services" and that the Community Education department would help to market Mid-Delta Home Health's cardiac program. Indeed, according to a former P&T Management vice president, "Community Education is a euphemism for marketing." Further, according to former Community Education managers, the primary responsibilities of Community Education staff were to promote and market on behalf of Mid-Delta and Taylor's Medical Clinics. In discussions with us, a former manager said that the duties of P&T Management's Community Education staff were "for the purpose of developing business" for Mid-Delta Home Health and Taylor's Medical Clinics, generating physician referrals, and attracting managed care contracts and for other sales functions. According to a former Mid-Delta employee, Mrs. 
Reed used the bonus system as a means, in part, to purchase a business and be reimbursed by Medicare. We learned that Mrs. Reed had purchased a business called Warren's Children's Services for $125,000. This business provided services under Medicaid's Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) program for children from birth to age 18 years. In February 1995, Mrs. Reed hired Ms. Betty Martin, owner of Warren's Children's Services, as P&T Management's Director of EPSDT at a $70,000-a-year salary. Ms. Martin was to educate the nursing staff on the EPSDT program. Mrs. Reed gave Ms. Martin a $25,000 check as a down payment for Warren's Children's Services in March 1995 and a second check for $25,000 in December 1995. (See table 2.) A year later, in December 1996, Ms. Martin received a P&T Management bonus of about $12,000. However, Mrs. Reed told Ms. Martin that $10,000 of the bonus was partial payment for Ms. Martin's business. Ms. Martin stated to us that she was concerned because P&T Management withheld taxes from the bonus. We determined that the $10,000 portion of the bonus had been claimed improperly as part of Ms. Martin's payroll costs. In June 1997, according to Ms. Martin, she received two more checks for $5,000 each, in partial payment for the business. One check was presented as an advance bonus; and the other, as a salary advancement, or pay raise. Ms. Martin returned the checks to Mrs. Reed and demanded the remainder of the money owed her for the business. According to Ms. Martin, Mrs. Reed replied, "I'm not going to employ you and pay you [for the business] too." Shortly thereafter, Ms. Martin left the company. Ms. Martin subsequently received a check for $35,000 with a note, signed by Mrs. Reed, that said, "Before July 17, 1997, I will pay the $30,000 I owe you." We confirmed that Ms. Martin received an additional check for $30,000. When we asked Mrs. Reed about the payments to Ms. Martin, she confirmed that she had given Ms. Martin $10,000 in bonus money as a payment toward the purchase of Warren's Children's Services. She also confirmed that taxes had been withheld from the bonus. After we had questioned Mrs. Reed about the matter, she talked with her controller and her Director of Finance. Mrs. Reed then informed us that the controller and the Director of Finance had determined that she still owed Ms. Martin $10,000 because the bonus should not have represented partial payment for the business. As of February 1998, Ms. Martin had not received the final $10,000 payment. Mid-Delta Home Health nurses and other professionals voiced concerns to us that Mid-Delta was providing Medicare-reimbursed home health care services to patients who, in their professional opinions, were ineligible for the services. In response, we visited and/or reviewed patient documents of 41 home health care patients. In this regard, the intermediary—which we asked to also review patient documents—and we question the reasonableness and necessity of Mid-Delta services received by at least 34 percent of those patients. Our questions involve (1) Mid-Delta actions to ensure continued home health services to Medicare patients, (2) excessive home visits by Mid-Delta staff, and (3) the lack of documentation to justify home visits. 
The intermediary and we also question Mid-Delta's provision of Medicare-reimbursable home health services to some apparently ineligible patients, as they did not appear to meet HCFA's requirement that their condition create an inability to leave home without "considerable, taxing effort." After interviewing a number of Mid-Delta Home Health's patients and their friends and relatives and evaluating the patients' plans of care (HCFA Form 485) and other case material, we question the reasonableness and/or necessity of the Medicare-reimbursable home health care services provided to 14 of the 41 patients reviewed during our investigation. The intermediary stated that in these cases, the claims would not be allowed. For example, the intermediary and we noted that Mid-Delta was providing services that were not covered in the plans of care. The situations giving rise to these questionable Mid-Delta services included the following: Exaggerated severity of patient conditions in patient-care documents to ensure continued home health services. For example, the May-July 1997 plan of care for a patient, being seen for over 2 years for recurring seizures, stated that he had had a seizure in June 1997. However, the physician's narrative report for that patient indicated that this was untrue—the patient had not had a seizure during the plan-of-care period. For another patient, the intermediary, in its review of the patient's plan of care, noted that the Mid-Delta Home Health documentation "seem to exaggerate the patient's condition." Excessive use of skilled nursing visits. For example, a Mid-Delta patient had been seen for 5 years for hypertension-related conditions. For the June-August 1997 period, Mid-Delta nurses visited the patient twice a week for these conditions. However, the intermediary noted that the patient's condition as described in the plan of care showed the necessity for only one visit a month. In addition, the June-August 1997 plan of care for a diabetic patient with hypertension ordered weekly skilled nursing visits for these conditions. However, the intermediary noted in its review of the patient's plan of care that the patient needed only monthly skilled nursing visits for bloodwork. Weekly visits were not reasonable and necessary. Lack of documentation in plans of care to justify the need for home health services. The plan of care for a diabetic Mid-Delta patient stated that the patient was unable to fill his syringes accurately, necessitating skilled nursing visits. However, the intermediary could find no documentation to support a reason for the patient's inability. In addition, another patient, having been visited for 6 years, was prescribed a new drug in late April 1997, necessitating twice-a-week visits for 4 weeks. However, the intermediary's review noted that the patient's June-August 1997 plan of care still called the medication "new." The plan of care, according to the intermediary's review, included no documentation to indicate the need for continued skilled nursing visits. Mid-Delta nurses, the intermediary (after a preliminary review of patient data), and we concluded that Mid-Delta was providing services to patients whose eligibility was questionable. Some of the Mid-Delta Home Health patients we visited or whose cases we reviewed did not appear to meet HCFA requirements that they be homebound. According to patient interviews and our observations, the effort these patients needed to leave home was neither considerable nor taxing. 
Yet Mid-Delta provided them Medicare-reimbursable home health care services. For example, one elderly Mid-Delta Home Health patient was in his yard moving a 5-foot section of a telephone pole when we visited. The patient's actions contradicted Mid-Delta's patient records, relied on by the intermediary for eligibility determinations, which indicated that the patient had poor endurance, ambulated with a cane, and appeared homebound. Another Mid-Delta patient, under home health care for about 2 years, received skilled nursing visits twice a week to monitor her blood pressure and a heart condition. However, when we visited her, she was conducting a child care service in her home with four children, aged approximately 2 to 5 years. The intermediary stated, when we asked about this situation, that such activity meant that the Mid-Delta patient was most likely ineligible for home health care. A third patient, classified as homebound, told us that he regularly walked 2 to 3 miles a day. Some other patients in our investigation also left their homes on a regular basis—whether walking or driving—for such activities as visits to a neighbor, store, bank, or post office. With regard to issues of home health care eligibility and services, we have reported in the past that few Medicare home health claims are subject to medical review, Medicare beneficiaries are rarely visited by fiscal intermediaries, and the physicians of record have limited involvement in home health care. Indeed, our 1995 report, Medicare: Allegations Against ABC Home Health (GAO/OSI-95-17), discussed questionable activities regarding the ABC Home Health Agency that were similar to those in our investigation of Mid-Delta Home Health. We conducted our investigation during 1997, following up on allegations made by former and current employees of P&T Management and Mid-Delta Home Health. Our inquiry covered those organizations' participation in the Medicare home health program. We reviewed applicable laws and regulations, HCFA directives, and documents presented by these organizations and by their former and current employees. We also reviewed Mid-Delta Home Health patient files and cost records, cost reports submitted to the Medicare intermediary, and those provided by the organizations' accountant and controller. The records and documents we reviewed fell primarily between January 1, 1993—the first year that the intermediary audited the Mid-Delta Home Health cost report—and December 31, 1996. In addition, we reviewed court documents cited in this report and various other records provided by the intermediary, P&T Management, Mid-Delta Home Health, Mid-Delta Development League, and other affiliated companies. We interviewed over 67 current and former employees of P&T Management and Mid-Delta Home Health and met with state regulatory officials at selected locations in Mississippi. We also interviewed a number of Mid-Delta Home Health patients, their relatives, and their friends and visited some Mid-Delta patients at their residences. In addition, we met with intermediary officials, investigators, and regulatory officials at HCFA in Florida and Maryland. As arranged with your offices, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees; the Secretary of Health and Human Services; the Inspector General, Department of Health and Human Services; and other officials of the Department. 
Copies will also be made available to other interested parties on request. If you have any questions regarding this investigation, please contact me on (202) 512-7455 or Assistant Director Barney Gomez of my staff on (202) 512-6722. Major contributors to this report are listed in appendix I. Medicare Home Health: Success of Balanced Budget Act Cost Controls Depends on Effective and Timely Implementation (GAO/T-HEHS-98-41, Oct. 29, 1997). Medicare Home Health Agencies: Certification Process Is Ineffective in Excluding Problem Agencies (GAO/T-HEHS-97-180, July 28, 1997). Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997). Medicare Post-Acute Care: Cost Growth and Proposals to Manage It Through Prospective Payment and Other Controls (GAO/T-HEHS-97-106, Apr. 9, 1997). Medicare: Home Health Cost Growth and Administration's Proposal for Prospective Payment (GAO/T-HEHS-97-92, Mar. 5, 1997). Medicare Post-Acute Care: Home Health and Skilled Nursing Facility Cost Growth and Proposals for Prospective Payment (GAO/T-HEHS-97-90, Mar. 4, 1997). Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996). Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995).
Pursuant to a congressional request, GAO investigated allegations of Medicare improprieties by home health care provider Mid-Delta Home Health of Belzoni, Mississippi, and affiliated companies, focusing on allegations that Mid-Delta: (1) routinely requested and received leave or bonuses back from its employees while charging Medicare their full amount; (2) paid the owner's daughter a full-time salary and charged it to Medicare although she was a full-time nursing student; and (3) conducted unnecessary and excessive home health care patient visits. GAO noted that: (1) Medicare, through the intermediary, reimbursed Mid-Delta Home Health for payroll costs between January 1993 and December 1996 that, in GAO's opinion, were improperly claimed because they did not represent actual costs to the provider; (2) specifically, the owner of the company, Clara T. Reed, regularly asked employees to return to the company the cash value of unused leave and about 20 percent or more of bonuses received; (3) the employees were told that the returned money was needed for, among other things, a Mid-Delta Home Health-sponsored indigent care fund; (4) however, rather than use the fund to provide home health care to those who could not afford it, Mid-Delta officials stated that the money was used to offset unpaid bills of private-pay patients of Mid-Delta's affiliated rural health clinics; (5) Mid-Delta Home Health also improperly claimed and was reimbursed by Medicare for other costs that did not meet Medicare cost reimbursement principles since they were not related to patient care; (6) one example involved salary paid to the owner's daughter as a P&T Management executive vice president for over half of 1996 while she attended school full-time; (7) further, GAO questions the reasonableness of the daughter's $65,000 in 1996 bonuses claimed by Mid-Delta for Medicare reimbursement; (8) in addition, Mid-Delta was reimbursed by Medicare for the payroll costs of some P&T Management employees whose positions appeared to focus on marketing activities; (9) GAO questions the propriety of these claims because Medicare does not reimburse providers for marketing costs used to increase patient utilization of the provider's facilities; (10) in another payroll-cost matter, Mrs. Reed purchased a business from a third party, hired that individual to work for P&T Management, and gave the individual a $10,000 bonus that was considered partial payment of the purchase price; (11) Mid-Delta then improperly claimed the bonus as part of its payroll costs and was reimbursed by Medicare for this payment; (12) the purchase of a business does not qualify as a payroll cost; and moreover, Medicare does not reimburse providers for the cost of purchasing a business; (13) as alleged by current and former Mid-Delta Home Health nurses, Mid-Delta staff visited individual Medicare beneficiaries whose eligibility or need for the visits was questionable; and (14) GAO visited or reviewed case files for 41 of the patients identified by the nurses and determined that for at least 14, or 34 percent, of the patients, eligibility for Medicare-reimbursed services was questionable.
As the Internal Revenue Service (IRS) replaces its outdated computer and telecommunications systems, it is also overhauling the way it is organized, staffed, and operated. These changes are part of a new business vision designed to take advantage of new capabilities as IRS moves toward a paperless electronic environment. As these changes are phased in over the next several years, thousands of employees could be displaced as their jobs are eliminated or redesigned. IRS pledged in 1990 that employees displaced by modernization would be given the opportunity for retraining that would allow them to maintain their employment at the same grade. To help keep this pledge while also meeting the job requirements of the new environment, IRS negotiated standard redeployment policies and procedures in a November 1993 Redeployment Understanding with the National Treasury Employees Union (NTEU). According to the Understanding and IRS officials, the goal of the redeployment process was to move employees out of positions that would not continue in the modernized environment and into positions—new, redesigned, or existing—that would be needed in the new environment. At that time, IRS planned to meet its changing job requirements largely through redeployment. Because of funding reductions in fiscal year 1996 and expectations of reduced funding levels in fiscal year 1997, however, IRS decided that it could no longer guarantee that employees would be given the opportunity to transfer into new jobs within the agency. Thus, after we had completed our audit work, IRS terminated the Redeployment Understanding and began planning for a near-term reduction-in-force. Because it is important that IRS' workforce have the knowledge, skills, and abilities needed for the new environment, we reviewed, under our basic legislative authority, IRS' initial use of the procedures established through the Redeployment Understanding. Although those procedures have since been terminated, IRS' experiences in implementing them provide useful information for developing any future redeployment procedures. In January 1994, IRS had about 131,000 employees in a National Office, 7 regional offices, 63 district offices, 10 service centers, 2 computing centers, and 1 compliance center (appendix I has a detailed breakdown by type of employee). District operations included hundreds of local posts of duty, 34 locations that housed taxpayer service and collection call sites, and 3 forms distribution centers. As part of its modernization, IRS, in 1995 and 1996, reduced the number of regions from 7 to 4 and consolidated the number of districts from 63 to 33. IRS is also consolidating various support functions that were decentralized in as many as 84 separate organizations. For example, most of the staff support for basic resources management functions, such as personnel, facilities management, and training, is being consolidated into 21 host locations. Similarly, information systems jobs, such as computer programmers and operators at service centers and district offices, are to be consolidated into a yet-to-be-determined number of field information systems offices. The restructuring of IRS' service centers, which accounted for about 39 percent of its workforce in January 1994, is a major component of IRS' new business vision. Currently, all 10 service centers process tax returns and other documents and have various forms of non-face-to-face interaction with taxpayers. 
IRS’ plan, as of February 1996, was to have (1) all 10 centers function as customer-service sites, (2) at least 5 of the 10 centers function as submission processing centers, and (3) 1 of the 5 submission processing centers also serve as IRS’ third computing center. IRS is also changing where and how it provides customer service. Until 1994, customer service was provided at the 10 service centers, the 34 locations that housed ACS and/or TPS sites, and the 3 forms distribution centers. Under the new business vision, customer service is to be provided at only 23 locations—the 10 service centers and 13 other locations. Besides absorbing the functions and workloads of TPS and ACS sites and forms distribution centers, customer-service sites are to also absorb and attempt to convert, to the telephone, some work now done by correspondence in various service center branches, such as collections, adjustments, and taxpayer relations. In December 1993, IRS estimated that these business vision changes would eliminate more than 19,600 service center jobs and more than 4,600 district office jobs. In addition, the consolidation of regions and districts was expected to displace over 1,100 managers and support staff. To maintain employee morale and cooperation during the transition to its new environment, IRS pledged, in a 1990 policy statement, that career and career-conditional employees would be given the opportunity for retraining that would allow them to maintain employment at their current grade. This pledge did not apply to temporary and term employees. IRS officials believed that attrition, the use of term employees for jobs being phased out, and the need to fill the additional customer service and compliance jobs authorized by Congress as part of IRS’ fiscal year 1995 appropriation, would enable IRS to meet new job requirements and keep its job protection pledge. Under this workforce transition strategy, displaced employees would have to be redeployed to new or redesigned jobs that would generally require greater technical knowledge and communication skills than are needed for their current jobs. Responding to a series of reports citing the need for sound human resource planning as IRS implements its new business vision, IRS has done much to prepare for the redeployment of employees whose jobs are expected to be redesigned or eliminated. Over the past 3 years, IRS has (1) developed various models for projecting and comparing current and future workforce requirements, (2) established standard redeployment policies and procedures and a Redeployment Resolution Council in partnership with NTEU, and (3) developed site-specific plans for redeploying employees. Appendix II provides a brief overview of these efforts. In the November 1993 Redeployment Understanding, IRS and NTEU established, for the first time, standard procedures for the redeployment of bargaining-unit employees whose jobs would be redesigned or eliminated in the transition to the modernized environment. Before they established standard procedures, IRS and NTEU were negotiating the redeployment of displaced employees on a project-by-project basis. According to its general work contract with NTEU, IRS could involuntarily reassign employees whose jobs were abolished, but such reassignments were subject to negotiations. 
The new standard procedures generally required that vacancies for positions needed in the new environment be filled first through lateral reassignment of eligible volunteers, in order of their seniority, as defined by their time in federal service. If the number of volunteers was insufficient, IRS had the option of using involuntary reassignment of the least senior employees in the local area or using the normal IRS-wide competitive process to fill the remaining openings. While the Redeployment Understanding was a binding document, it could be reopened or terminated at any time by IRS or NTEU. As noted earlier, it was terminated effective August 23, 1996. Because IRS was still in the early stages of its planned overhaul at the time of our audit work, the large-scale employee displacement expected from the consolidation and modernization of the customer-service and submission processing functions had not yet occurred. Thus, with the exception of some displaced National and Regional Office staff, redeployment in fiscal years 1994 and 1995 was driven largely by the availability of positions into which employees in jobs not expected to be needed in the new environment could be redeployed. These included new, redesigned, or existing (vacant) positions that could be expected to continue in the modernized environment. According to National Office officials, redeployment in fiscal year 1994 was driven largely by the need to staff the first operational customer-service units and to fill vacancies created by attrition. Another factor driving redeployment in fiscal year 1995 was the reassignment of existing employees to over 4,300 new compliance and customer-service jobs authorized that year. We examined IRS' early experience in redeploying employees to new jobs using the procedures established in the November 1993 Redeployment Understanding between IRS and NTEU. Our objective was to determine whether there were lessons to be learned from (1) IRS' initial use of these procedures and their impact on IRS' operations and (2) the reaction of redeployed employees and their supervisors to redeployment and the redeployment process. To address our objective, we reviewed the November 1993 Redeployment Understanding and associated supplements and revisions and discussed redeployment policies and procedures with cognizant IRS and NTEU officials; reviewed site redeployment plans and discussed preliminary redeployment results at four IRS service centers (Atlanta; Brookhaven, NY; Cincinnati; and Fresno, CA) and four district offices (Atlanta, Baltimore, Cincinnati, and San Francisco); reviewed IRS reports on redeployment results, including related internal reports; obtained and analyzed databases showing overall IRS staffing at three points in time—January 8, 1994; December 10, 1994; and January 6, 1996—in order to identify and monitor significant changes; and administered structured interviews, at the 8 locations we visited, to 188 employees who had been redeployed to new jobs, 30 supervisors who had gained a total of 346 redeployed employees, and 24 supervisors who had lost a total of 412 redeployed employees. The results of our interviews are not projectable to all IRS managers and employees. Appendix III contains information on how we selected the locations we visited and the persons we interviewed. 
Because IRS expected to make significant changes to its initial estimates of workforce requirements and the extent that employees would be redeployed to meet those requirements, we did not attempt to validate IRS’ workforce requirements and redeployment models or the output from those models. We conducted our review from June 1994 through July 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the National President of NTEU, or their designees. We received written comments from IRS’ Chief, Management and Administration, on September 11, 1996, and from NTEU’s National President on September 17, 1996. Those comments are summarized and evaluated on pages 32 and 44 and are reprinted in appendixes V and VI, respectively. Because the new redeployment procedures made too many people eligible for redeployment too soon and precluded IRS from directing experienced people into new jobs, (1) many employees were redeployed years before the jobs they left were to be eliminated and (2) training requirements increased while productivity and customer service decreased. Service centers, particularly their returns processing functions, were most affected. To help cope with declining experience levels and higher error rates, processing divisions increased their use of overtime and temporary assignments (details). The processing divisions also ended up hiring more new career and career-conditional employees than they had lost through redeployment to sustain paper returns processing operations until delayed modernization efforts are implemented. Before the Redeployment Understanding was terminated, IRS and NTEU had worked together to change redeployment policies and procedures to make better use of employee experience, but they had not fully resolved these problems. The November 1993 Redeployment Understanding designated many IRS employees, including virtually all service center employees, as eligible for redeployment without regard to whether or when their jobs were to be eliminated. Consequently, many employees were redeployed too soon in order to fill new compliance and customer service positions. IRS had to hire several thousand new employees to replace experienced employees who left jobs in service center returns processing divisions. Furthermore, the Redeployment Understanding required IRS to fill positions with volunteers by seniority, rather than first allowing IRS to redirect experienced employees to new jobs requiring many of the same tasks as their current jobs. The resulting increase in training requirements and decline in productivity could have been minimized had the Redeployment Understanding (1) limited redeployment to those employees whose jobs were being eliminated and (2) allowed IRS to move employees who had the experience and skills needed for the new jobs. As part of IRS’ fiscal year 1995 appropriation, Congress authorized $405 million for IRS to hire the full-time equivalent of 6,238 employees. According to IRS officials, the new jobs were primarily compliance and customer-service jobs at service centers and district offices. While some of the new compliance and customer-service jobs were filled by employees whose National Office or regional office jobs had been eliminated, most were filled by service center and district office employees whose jobs were not in jeopardy of being eliminated for several years. 
According to IRS redeployment plans at the time, the displacement of large numbers of processing employees was not expected to begin until fiscal year 1997 or later, when IRS was to begin implementing its Document Processing System (DPS) and consolidating its paper processing operations into five service centers. As of September 30, 1995, IRS had filled 5,470 of these jobs—4,325 of them, or 79 percent, with existing employees and the rest through new hires. Many of the employees who transferred into those new jobs had to be replaced with less-experienced employees. As shown in appendix IV, IRS redeployed 1,182 and 1,872 career or career-conditional employees from its processing divisions to jobs elsewhere in IRS in 1994 and 1995, respectively. During the same years, the processing divisions hired 14 and 3,872 new career or career-conditional employees, respectively. These new career and career-conditional employees are also eligible for redeployment. Although IRS hired mostly term employees in 1994, a National Office official told us that IRS had to hire new career-status employees in 1995 because term employees could not be used to sustain current processing operations long enough due to the 4-year limit on term employment. The Redeployment Understanding contributed to this sizable turnover of service center staff by making almost all service center employees eligible for redeployment, since substantial operational changes were planned for the service centers. According to our analysis of IRS staffing data, of 50,580 service center employees on IRS' rolls in January 1994, 47,317 were designated as eligible for redeployment. The only exceptions were 2,796 term and temporary employees and 467 Criminal Investigation Division employees. According to IRS National Office officials, NTEU would not agree to limit redeployment eligibility to employees in specific jobs because IRS had not finalized the types and numbers of positions needed for its new environment. Also, according to an NTEU official, NTEU presumed that all service center jobs would be affected and that IRS should not offer available jobs only to displaced employees. Because, unlike at the service centers, many district office positions are expected to continue in the modernized environment, the Redeployment Understanding generally limited redeployment eligibility in district offices to employees at closing ACS and TPS sites, resources management support services employees, and some information systems employees. Although redeployment eligibility at district offices was more restricted than at service centers, many of the eligible district office employees were designated as eligible long before their jobs were scheduled for elimination. For example, among the district employees designated as redeployment eligible in November 1993, there were about 5,500 employees at 29 ACS and TPS sites that were scheduled to close. At that time, however, 27 of these 29 sites were not scheduled to close until October 1999. The other two sites were scheduled to close in October 1996 and October 1997, respectively. According to National Office officials, however, 11 of the 29 ACS and TPS sites were closed earlier than expected because they experienced "high attrition." Specifically, nine sites that were scheduled to close in 1999 and the two sites that were scheduled to close by 1997 were closed 2 to 5 years early—between 1994 and 1996.
Also, as of September 1996, eight other sites that were scheduled to close in 1999 were rescheduled to close sooner—from 1996 through 1998. However, 4 of the remaining 10 sites originally scheduled to close in 1999 are now scheduled to close between 2000 and 2002. On the basis of our analyses of staffing and reassignment data provided by IRS, we believe that the early closure of some sites and the changes to scheduled closure dates for other sites occurred, at least in part, because employees who had been declared redeployment eligible in November 1993 were redeployed earlier than expected. For example, staffing data for the Brooklyn TPS site showed that of the 240 employees who were on the site's rolls on January 8, 1994, 105 had been reassigned to other jobs; 3 had been assigned to other TPS sites as of December 10, 1994; and 18 were no longer employed with IRS. At least 76 of the 108 reassigned employees were reassigned before the office was closed in October 1994. Employees at four ACS and TPS sites were also designated as eligible even though their jobs were to be merged with customer-service centers in the same local area. As a result, many of these employees were redeployed out of the closing sites while other career or career-conditional employees were hired or redeployed into the closing sites. Our structured interviews of 24 service center and district office supervisors who lost redeployed employees provided further evidence of premature redeployment. According to the supervisors, who reported losing 412 employees, none of the positions vacated by those employees had been eliminated. The supervisors said that IRS planned to fill 350 (85 percent) when funding became available, leaving 62 (15 percent) to be eliminated. Many of the service center officials and supervisors we interviewed expressed the belief that too many employees were designated as eligible for redeployment. For example, the Chief of the Collections Branch at one center said that "blanket redeployment" for the entire service center was a "mistake." A Processing Division Chief at another center said that redeployment eligibility should be limited to displaced employees and should not include employees who will not be displaced for many years, such as those in the Processing Division. A supervisor of a section at that center who had gained redeployed employees said it had been a "costly transition" because "all employees were considered redeployment eligible even if their job had not been abolished." With some exceptions, such as hardships and placement actions resulting from a grievance, the November 1993 Redeployment Understanding generally required that vacancies for bargaining-unit positions that would be needed in the new environment be filled as follows: first, by lateral reassignment (or change to lower grade), based on seniority, of eligible volunteers (1) from within the local commuting area and then, if the number of volunteers was insufficient, (2) from outside the local commuting area. When the number of volunteers for lateral reassignment (or change to lower grade) was insufficient, IRS could consider making directed (involuntary) reassignments, by inverse seniority, from eligible employees within the local commuting area. When the number of volunteers for lateral reassignment (or change to lower grade) was insufficient and IRS did not use the directed reassignment process, IRS could fill the vacancy through IRS-wide competition.
When there were no redeployment-eligible internal applicants, IRS could fill vacancies for jobs that would be continued in the new environment with external hires. Vacancies for noncontinuing jobs were to be filled by temporary or term appointments. In October 1994, IRS and NTEU made an exception to the Redeployment Understanding to allow district customer-service sites to staff their new units with volunteers from closing ACS and TPS sites, before using the established redeployment process, since staff in those sites would already have experience in resolving taxpayer account matters via the telephone. At the same time, four service centers (Andover, Atlanta, Cincinnati, and Philadelphia) were authorized to fill up to 30 percent of their new customer-service positions with volunteers from ACS and TPS sites that were closing in nearby districts (Boston, Atlanta, Cincinnati, and Philadelphia). The Cincinnati Service Center requested this exception in order to optimize the mix of experience needed to begin its new customer-service operations. Except for certain resources management employees, no other exceptions were made to take advantage of service center employee experience. Thus, the redeployment procedures did not give service centers a viable opportunity to redirect experienced employees to related new or redesigned jobs before seeking volunteers from unrelated jobs. The Redeployment Understanding technically allowed IRS to make directed reassignments before using the competitive process; however, using this option was not practical because it required that directed assignments be made in inverse seniority order from within the entire local commuting area. This provision meant that a center's newest employee (and the one least likely to have related experience) would be the first one directed to fill a vacancy. The redeployment of employees into new customer-service units illustrates how these procedures limited IRS' ability to reinvest experience. IRS' customer-service workload migration plans called for the phased transfer of related work, workers, and funding, concurrently, from the district and service center sites currently doing the work to the new customer-service units. In that regard, IRS had directed 137 staff from related areas into the customer-service prototype unit at the Fresno Service Center before the Redeployment Understanding took effect. However, staffing of subsequent customer-service vacancies at Fresno and customer-service units established at other centers was subject to the Redeployment Understanding. In the service centers, related work includes that being handled through correspondence by employees in the Adjustments, Taxpayer Relations, and Collections branches. For example, employees in the Adjustments Branch generally correspond with taxpayers to resolve account-related problems and make necessary adjustments to taxpayer accounts using the Integrated Data Retrieval System (IDRS). Employees in the customer-service units being phased in at service centers generally do the same type of work, except that they communicate with taxpayers primarily by telephone rather than correspondence. Thus, experienced Adjustments Branch employees might need training in telephone techniques but would need little or no additional training in how to resolve account-related problems or how to adjust accounts using IDRS.
However, employees redeployed by seniority or the competitive process could come from areas such as the Processing Division, where they worked as mail handlers, data transcribers, or in other jobs totally unrelated to the kind of work they would be expected to do in the customer-service units. These employees would require significant training not only in telephone techniques but also in resolving account-related problems and using IDRS. Even in a well-timed and properly targeted redeployment, some temporary increase in training and decline in productivity and customer service can be expected as an inherent consequence. At a minimum, the redeployment of employees increases training requirements and decreases productivity and service because neither the experienced employees serving as instructors nor the trainees are actively contributing to the organization's business while they are involved in classroom training. Nor are they contributing fully during on-the-job (OJT) training. Because of other variables affecting productivity (such as new or increased workloads, equipment failure, etc.), it is difficult to quantify the degree of productivity decline specifically attributable to redeployment, much less the portion that was inherent versus that which was avoidable. Nevertheless, we think it is reasonable to assume that the redeployment procedures, by making too many employees eligible for redeployment too soon and by limiting IRS' ability to take full advantage of employees' job experience, resulted in a greater level of inexperience than might have otherwise been the case and thus led to more training, less productivity, and less service to taxpayers. Although training requirements increased due to redeployment that occurred in fiscal year 1994, they increased substantially in fiscal year 1995 due to the availability of several thousand additional compliance jobs authorized that year. As stated earlier, IRS redeployed existing employees to fill 4,325 (or 79 percent) of the 5,470 additional compliance jobs authorized for fiscal year 1995. Because these jobs were filled by redeployment-eligible employees whose vacated positions, such as those in processing or customer service, also had to be filled and the persons filling them had to be trained, training often occurred two or more times in order to fill one new job. Service center and district officials and supervisors expressed concern about this increase in training requirements. For example, one service center official said that redeployment had a big effect on training, and that training costs had increased over $240,000, or 34 percent, during the first 6 months of 1995 from the same period in 1994. An official at a second service center reported: "The Center expended 92,086 more training hours over the same period of time in [fiscal year] 95 than 94. Compliance Division accounted for 57,799 of these hours, due to the hiring initiative, while Processing Division accounted for an additional 31,040." According to data provided by the Center, the 92,086 additional hours was an increase of 24 percent over the 377,442 hours used in fiscal year 1994. A compliance division manager at a third service center said that his division had exceeded its fiscal year 1995 training allotment by over 14,000 hours, or 62 percent.
Similarly, although fewer district employees were eligible for redeployment than at service centers, an official at one district said that because TPS and ACS work in that district could not be absorbed at sites in other districts, vacancies had to be filled with new temporary and term employees, which created concerns about quality and additional training costs. Many of the supervisors we interviewed also said that redeploying experienced employees out of their units and/or inexperienced employees into their units increased their training requirements. We interviewed 30 supervisors (hereafter referred to as “gaining supervisors”) who had, altogether, received 346 redeployed employees and 24 supervisors (hereafter referred to as “losing supervisors”) who had lost 412 employees to other units. As shown in figure 2.1, 20 (83 percent) of the 24 supervisors who lost employees and 20 (67 percent) of the 30 supervisors who gained employees said redeployment had increased training requirements in their units. According to many of the service center and district office officials and supervisors we interviewed, the increased training requirements also decreased the number of experienced employees on line—since these employees are often used as training instructors—thus further eroding unit productivity. We asked losing supervisors how the loss of employees through redeployment affected their unit’s productivity in terms of volume, accuracy, and timeliness. Their views varied. As shown in figure 2.2, of the 24 supervisors interviewed, 18 (75 percent) said that the volume of their unit’s output decreased, 9 (38 percent) said that the accuracy of their output decreased, and 9 (38 percent) said that the timeliness of their output decreased. Conversely, 6 (25 percent), 15 (62 percent), and 15 (62 percent) of the managers said that their units’ volume, accuracy, and timeliness, respectively, either had not been affected by the redeployment or had increased. The 30 gaining supervisors we interviewed also had mixed views on how the redeployment process affected their unit’s productivity. As shown in figure 2.3, decreased volume, accuracy, and timeliness were reported by 10 (33 percent), 7 (23 percent), and 11 (37 percent), respectively, of those supervisors. Conversely, 16 (54 percent), 18 (60 percent), and 15 (50 percent) of them said their units’ volume, accuracy, and timeliness, respectively, either had not been affected by the redeployment or had increased. Moreover, 4 of 16 gaining supervisors and 11 of 20 losing supervisors whose units normally used overtime said redeployment had increased their use of overtime. Similarly, 4 of 16 gaining supervisors and 8 of 18 losing supervisors whose units normally used temporary details from other units said that their use of details had also increased due to redeployment. We also asked supervisors whose employees were redeployed how this loss affected their unit’s service to taxpayers. Of the 24 supervisors, 10 (42 percent) said the loss of employees degraded their service to taxpayers. The degraded services mentioned most often included (1) taking longer to answer telephone calls and correspondence from taxpayers, (2) increases in the number of calls waiting and abandoned, and (3) growing backlogs of cases to be processed. Service center and district officials we interviewed also mentioned that productivity and taxpayer service had declined with the erosion of unit experience. 
For example: A customer-service branch chief at one center said that the branch was answering only 83 percent of its scheduled calls in June 1995, due to inexperienced employees and the training time they required, which had not been considered in developing the work schedule. A collections branch chief said that all the movement of employees associated with redeployment had reduced the branch's timeliness in answering correspondence. In June 1995, the branch's cumulative rate was 8.4 days over the 21-day standard. In some peak months, the rate rose as high as 40.1 days. Another official stated: "Redeployment losses have had a major impact on the Problem Resolution Program [PRP]. It is well known that PRP caseworkers do not become truly efficient for 2 - 3 years; the training curve is slow because of the difficulties of the cases. Many of the more experienced caseworkers were the first to be selected as compliance hires. Even though we replenish the staff, they continue to apply for redeployment positions. The result in Taxpayer Service was a reduction in PRP productivity in 1995 from .5 per hour (one of the highest rates in the country) to .2 per hour." The most significant productivity declines may have been experienced within the service center processing divisions. Two internal IRS studies confirmed that the processing divisions lost productivity because employees who were experienced in processing returns were redeployed to compliance and customer-service jobs and replaced by inexperienced employees who were either newly hired or reassigned from other functional areas. According to a 1995 IRS study of service center productivity, redeployment hurt service center productivity by "encouraging pipeline employees to transfer out of the Processing Division." According to the study, as illustrated in figure 2.4, the percentages of permanent employees transferring out of returns processing jobs in the 1995 filing season increased substantially from the prior filing season at 8 of the 10 service centers, and the increases were much larger at centers that have not been designated to continue as processing centers. The study also said that new processing employees were significantly less productive than experienced employees. It estimated that employees in their second filing season were 20 percent more productive than in their first. The internal studies also included the following observations: "During the 1995 filing season, processing functions in the service centers expended 40 percent more overtime hours than during 1994. In addition, the time expended by employees who were detailed-in from non-processing jobs increased by 19 percent in 1995." ". . . Processing functions nationwide suffered a significant experience drain prior to the beginning of the 1995 filing season. Management indicated that between 1400 and 1800 employees had been moved from Processing Divisions to fill Customer Service and Compliance jobs . . ." One of the reports explained that redeployed or newly hired replacements could not perform some processing steps at rates used to schedule the work. Before the Redeployment Understanding was terminated, IRS and NTEU had taken some actions designed to minimize the loss of employee experience during the transition to the new business vision. IRS and NTEU had also been discussing (1) whether to migrate related work, workers, and funding together into the new customer-service environment, and (2) the need to curtail personnel turnover and the resulting erosion of experience and productivity.
According to a National Office official, as of June 1996, IRS was also validating skill assessment tools that it hoped to use in the redeployment process. In October 1994, the Redeployment Resolution Council withdrew the designation of resources management employees in grades GS-9 and above at host sites as redeployment eligible. Although the Council decided not to withdraw the designation of employees who were occupying positions that would, over time, be transformed into the new customer-service positions, it did restrict the lateral movement of employees out of these new positions after they had used their redeployment eligibility to move into them. In the meantime, the Council authorized district offices to staff their continuing customer-service sites with volunteers from closing ACS and TPS sites before using normal redeployment procedures. Similarly, as discussed earlier, the Council authorized four service centers to fill up to 30 percent of their new customer-service positions with volunteers from ACS and TPS sites that were closing in nearby districts instead of using normal redeployment procedures. Although they applied only on a voluntary basis, these exceptions to the established redeployment process helped to minimize the loss of ACS and TPS employees who were experienced in performing customer-service functions. In April 1995, IRS customer-service officials were planning to request an exception to the redeployment process that would have allowed the phased migration of related work, workers, and funding, concurrently, into customer service, in accordance with customer-service workload migration plans. According to IRS officials, this exception request was never formally sent to the Redeployment Resolution Council. Instead, officials said the matter was informally discussed among IRS and NTEU council members. We were told in February 1996 that IRS and NTEU were still working informally on how best to deal with excessive turnover and experience loss IRS-wide resulting from the procedures specified in the Redeployment Understanding. Although it seems reasonable to expect some operational inefficiencies as an inherent part of any redeployment process, those inefficiencies were exacerbated at IRS, in our opinion, by redeployment procedures that made employees eligible for redeployment too soon and prevented IRS from redirecting employees to new jobs on the basis of their related work experiences. Redeployment occurred long before the expected large-scale displacement of employees associated with the implementation of planned modernization projects and consolidation efforts. Consequently, many of the jobs vacated by redeployed employees had to be filled again by newly hired employees. Thus, IRS' first redeployment experience came too early to be very effective in achieving the goal of redeployment—which is to move employees out of jobs that would not be needed in the new environment and into jobs that would. Because employees experienced in certain areas were often redeployed to areas requiring very different skills and were, in turn, replaced by inexperienced staff, IRS lost valuable experience and in some instances incurred training costs twice, especially at its service centers. Before the Redeployment Understanding was terminated, IRS and NTEU had worked together to resolve a number of problems, but they had not yet agreed on using current job experience in making redeployment decisions.
Unless future redeployments are structured in a way that allows IRS to redirect current employee experience and skills to jobs in the new environment, considerable experience could be lost during the transition, bringing about further increases in training costs and declines in productivity and customer service. For that same reason, it is also important that future redeployment be timed to coincide more closely with the implementation of modernization projects and consolidation efforts to better ensure that experienced employees are not vacating jobs long before those jobs are eliminated. We recommend that the Commissioner of Internal Revenue—should future redeployment procedures be developed—address the problems identified in this report by (1) limiting redeployment eligibility to employees whose current jobs have been or are about to be substantially altered or eliminated, so that redeployment of employees is timed closely with the implementation of modernization projects or consolidation efforts, and (2) allowing IRS to redirect employees who are currently and successfully performing existing jobs to redesigned jobs that are substantially the same before seeking volunteers from unrelated functions (similar to the exceptions made for district ACS and TPS employees). We requested comments on a draft of this report from the Commissioner of Internal Revenue and the National President of NTEU, or their designees. We received written comments from IRS' Chief, Management and Administration, on September 11, 1996, and from NTEU's National President on September 17, 1996. The written comments from IRS and NTEU are reprinted as appendixes V and VI, respectively. We also met with both parties, separately, on September 13, 1996, to discuss their comments. While agreeing that future redeployments should be better targeted and timed, IRS said that our discussion of the timing of past redeployments oversimplified the issue. IRS said that it did, in retrospect, allow reassignments to occur too soon but that the result would have been different if IRS' modernization plans had proceeded on the schedule envisioned when the Redeployment Understanding was signed. We do not agree. We considered IRS' modernization plans and schedules in making our assessment, and we cited specific examples in our report where reassignments occurred well before sites were to be closed or implementation of a new system was to begin. IRS did not provide any information to contradict the scheduling information cited in our report and noted in its comments that the information in our report was generally factual. IRS also commented on the relationship between redeployment and the hiring initiative, under which Congress authorized thousands of new compliance and customer-service positions. IRS said that the initiative provided an opportunity to redeploy many employees who were in noncontinuing positions and that if it "had not used these new positions for redeployment, and instead filled them with external hires, the number of employees still occupying non-continuing positions when the transition was scheduled to occur would have been much larger." We recognize that the timing of the hiring initiative was partly responsible for increased training requirements and reduced productivity, since over 4,000 additional jobs were made available to redeployment-eligible employees in fiscal year 1995—well before large-scale employee displacement was expected.
Nevertheless, we still believe that IRS would have experienced less disruption in fiscal years 1994 and 1995 had redeployment procedures focused on finding new jobs for employees as their displacement became imminent and allowed IRS to redeploy employees with related experience before those without such experience. More importantly, we believe that the lessons learned from IRS’ early redeployment experience will help it establish procedures aimed at minimizing disruption in the future, when there is no guarantee of additional hiring initiatives. In its comments on our draft report, NTEU said that the report is “flawed in its design, particularly with regard to its first stated objective, and that it fails to present any data to support the majority of the conclusions that are reached.” As an example, NTEU cited our conclusion that the redeployment procedures led to premature reassignments and operational inefficiencies. We disagree. Our conclusion about premature reassignments was based on an analysis of staffing and reassignment data for IRS service centers and for ACS and TPS sites; discussions with IRS officials and with service center and district office supervisors who lost redeployed employees; and reviews of IRS’ modernization and site closure plans. In our opinion, the results of that work, which are discussed on pages 16 to 21, provide a sufficient basis for concluding that the redeployment procedures led to premature reassignments. We reached our conclusion about operational inefficiencies after interviewing officials and supervisors in many of the affected organizational units and reviewing various documentation including several internal IRS reports and studies. Again, we believe that the results of our work, which are discussed on pages 21 to 30, provide sufficient data to support our conclusion. NTEU also said that to draw such a conclusion we would need to present some comparative analysis of the operational impact of the Redeployment Understanding versus some alternative selection procedure, such as the traditional competitive selection process. We did not intend to suggest in our report that IRS should have used the traditional competitive process in lieu of the redeployment procedures. That process, like the lateral redeployment process, can also result in the selection of employees without related experience, since a key factor in ranking employees is the appraised performance in their current jobs, which may not be related to the jobs being filled. Conversely, we also did not intend to suggest that IRS should be precluded from using competitive procedures in filling its new jobs. Such procedures would have to be used when redeploying employees to new jobs having higher career ladders than their current jobs. They might also have to be used when the number of employees with related experience or skills is less than the number of new positions. What we are suggesting is that the redeployment should have been more focused and better timed. While we acknowledge in the report that some operational inefficiencies can be expected with any redeployment process, we believe that the process would have been more efficient if the procedures were structured to (1) allow management to give priority to employees occupying positions that were closely related to the types of positions being filled and (2) time employee eligibility more closely to the dissolution of their jobs. 
We did not do a comparison of the operational impact of the Redeployment Understanding versus a redesigned redeployment that would have been more focused and better timed because it would have been highly speculative for us to attempt to quantify what the results would have been if IRS had used different redeployment procedures. Nevertheless, we think it is reasonable to assume that the Redeployment Understanding, by making too many employees eligible for redeployment too soon and by limiting IRS' ability to take full advantage of employees' job experience, resulted in a greater level of inexperience than might have otherwise been the case and thus led to more training, less productivity, and less service to taxpayers. NTEU suggested that our conclusions were based on an "erroneous assumption that the IRS could have simply reassigned, either voluntarily or involuntarily, its most qualified and most experienced employees" into the new compliance and customer-service jobs "without any further consideration and without any negative impact on processing division productivity." NTEU said that such an assumption was incorrect because (1) involuntary reassignment has a "negative impact on employee morale, overall performance, and productivity"; (2) the requirement that an employee cannot be noncompetitively reassigned to a position having a higher career ladder than that of the employee's current position greatly reduces the field of eligible employees outside of the processing division; and (3) we apparently assumed that IRS would not have had to backfill any of the vacancies created by filling the new compliance and customer-service jobs with employees who had related experience. We did not assume that IRS could reassign its most qualified and experienced employees without any negative impact on productivity. To the contrary, as noted earlier, we believe that some decrease in productivity can be expected even with a well-timed and properly targeted redeployment. We did not attempt to assess the relative effects of voluntary and involuntary reassignment on employee morale, performance, or productivity, nor are we implying that all reassignments should be done on an involuntary basis. Under the procedures envisioned by our recommendation, IRS could try the voluntary process before using the involuntary process or the normal competitive process. If it became necessary to use the competitive process to fill certain jobs, IRS could narrow the areas of consideration to certain groups of employees (e.g., those within the local commuting area, those in immediate jeopardy of losing their jobs, or those with current and directly related experience or skills). In addition to minimum qualification requirements, IRS could also apply selective ranking factors requiring directly related experience. By contrast, the Redeployment Understanding required IRS to fill a new job with the most senior volunteer for lateral assignment, even if that volunteer had no related experience or skills. Thus, IRS was precluded from selecting a less-senior volunteer who had related experience or skills. We agree with NTEU that the number of employees eligible for lateral redeployment might not have been enough to fill all of the new compliance and customer-service jobs without some impact on the processing division.
However, we believe that the impact would have been minimized if procedures had (1) made employees eligible for redeployment only when the event that was to displace them became a near-term reality and (2) allowed IRS, in filling jobs laterally, to give preference to employees who were in immediate jeopardy of being displaced from their current positions and who had related experience. If additional positions remained to be filled, we agree that IRS might have had to select some processing division employees. We did not assume that no vacated position would have to be backfilled. However, although the redeployment of employees with related experience before those without such experience could still require filling some of the jobs vacated by the experienced employees, we believe that the need to do this would be less under the kind of procedures suggested in our recommendation. For example, those procedures would allow the concurrent and phased migration of customer-service-related work and workers as planned by IRS and as done in initially staffing the customer-service prototype at the Fresno Service Center. Thus, IRS would be transferring positions rather than creating vacancies. In summary, we are not suggesting that IRS should be precluded from staffing any new jobs using employees whose jobs are not in jeopardy or who are not the most experienced. What we are saying is that redeployment procedures should apply to employees who are expected to be displaced by the imminent implementation of modernization projects or reorganization efforts. They should also be structured to give preference to employees whose jobs are in immediate jeopardy and to those who have experience related to the jobs being filled. Instead, the procedures adopted by IRS and NTEU made virtually all service center employees eligible for redeployment, without regard to when their jobs were to be eliminated or redesigned, and required IRS to fill new jobs at the same grade, using the most senior volunteers from both related and unrelated areas throughout the center before using other options, such as directed reassignments. Our responses to other comments made by IRS and NTEU can be found in appendixes V and VI. To obtain some input on redeployment from those most affected and to identify other issues that might warrant IRS' attention in future redeployments, we interviewed some redeployed employees and some of their new supervisors. Those interviews identified some concerns relating to such things as training and the amount of redeployment information provided to employees, but they also indicated that employees were generally satisfied with their new jobs, and supervisors were generally satisfied with their new employees. While there is room for improvement, as evidenced by the interviews and the declining productivity discussed in chapter 2, the reactions of employees and supervisors were encouraging. Of the 30 supervisors we interviewed, the great majority were either very satisfied (11) or generally satisfied (15) with their new employees. The supervisors also said that 92 percent (or 320) of their 346 new employees were meeting established standards for a "fully successful" level of performance—the minimum acceptable level for performance appraisal purposes. While we recognize the limitations associated with self-reporting, we also asked employees about their performance.
Of the 177 employees we interviewed who had received feedback, 155 (88 percent) said that they were performing at or above the "fully successful" level. Some of the 22 employees we interviewed who said they were performing below the fully successful level offered suggestions on what would help them improve their performance. The most frequently cited suggestions were more job knowledge, skills, or experience; more or better training; and consistent guidance and/or more feedback from their supervisors or managers. Some employees who were unable to perform successfully in their new positions had returned to their former positions. In one district, an official we interviewed who coordinated the redeployment at that site told us that 12 (8 percent) of 150 employees were reinstated in their old jobs after "failing to make the transition" to their new jobs. At least at that site, employees who returned to their old jobs were redesignated as eligible for redeployment. Some of the supervisors included in our sample said that employees who were unable to meet performance standards would be reassigned to other positions. At some sites, employees may be given an "opportunity period" of 1 year to improve their performance, after which they may be reassigned. Of the 188 employees we interviewed, 70 percent (131 employees) were satisfied with their new jobs, as shown in figure 3.1. Commonly mentioned reasons for this satisfaction included (1) the type of work or the work environment, (2) the challenging or interesting nature of the work, and (3) the sense of teamwork among coworkers and managers in their new units. The reasons most frequently cited by the 17 percent (32 employees) who were dissatisfied included (1) inadequate training; (2) unrealistic productivity expectations, especially for employees with little or no related experience; and (3) stress and fatigue from the length of time spent on the telephone or at a computer terminal. As discussed in chapter 2, IRS field officials indicated that redeployment had increased their training requirements. The impact on training was also evident from our interviews of redeployed employees and the supervisors who gained redeployed employees. Nearly a fifth of the redeployed employees we interviewed required more training than is normally provided for their positions. Many of these employees lacked related experience. Furthermore, although the Redeployment Understanding authorized only one additional training opportunity for employees who were not successful the first time, some supervisors said that they were told to allow as many training opportunities or as much time as necessary. Almost all of the redeployed employees included in our sample received classroom and/or OJT training. However, many of them either had their formal training period extended or had to repeat some or all of the training segments. In that regard, 32 (17 percent) of the employees we interviewed said they received additional training. Similarly, the gaining supervisors we interviewed said that 47 (14 percent) of their redeployed employees required additional training. According to a report by IRS headquarters officials after visiting one district office, 6 of 25 redeployed employees training to be revenue agents in that district failed the 12-week second phase of OJT twice. The six employees were either returned to their former positions or transferred to other compliance jobs.
A supervisor we interviewed at another site said that she had an employee who had been on OJT for almost 1 year, and that, before redeployment, he probably would not have been allowed more than 6 months of OJT. The supervisor was told that since the employee was obtained through redeployment, he would continue OJT "indefinitely." Employees without related experience often required the most training. For example, employees at one service center who were training for the customer-service representative position were divided into three groups on the basis of their knowledge of tax law and the computer system used to adjust taxpayer accounts (i.e., IDRS). The group with the least amount of knowledge required almost twice as much training as the group with the most knowledge. At another service center, where employees were trained together to minimize costs, officials told us they saw a correlation between the relatedness of employees' prior experience and their performance in training. For example, according to training records, employees redeployed from unrelated areas, who comprised about half of the class, failed tests more than twice as often as employees redeployed from related areas. Additionally, more than half of the employees from unrelated areas were still receiving OJT nearly 4 months after completion of classroom training, while those from related areas had completed their OJT in as little as 1 month and, in no case, more than 3 months. Of the 30 gaining supervisors included in our sample, 18 said that previous experience was a factor in the amount of training needed by new employees. One supervisor said that three of his four redeployed employees were receiving almost twice as much OJT as they would have if they had the related experience needed to perform the work. One official commented: "The requirement to select low-ranking redeployment eligibles from competitive certificates has had a negative impact on the Compliance functions as well as several employees. . . . Many of the employees would not have been selected under normal circumstances because of mediocre evaluations, or marginal interviews. This mandatory selection created 'false hopes' for the employees—setting them up to fail. These mandatory selections have resulted in several class failures, exorbitant training expenditures, disgruntled employees who have had to return to ACS and TPS." We asked employees to rate the adequacy of various aspects of the redeployment process, including (1) the assistance—such as career counseling, skill assessments, and job placement services—IRS provided in helping them find new positions; (2) the information IRS provided to explain the redeployment process; and (3) training—both classroom and OJT. Of the 188 employees we interviewed, 95 said that they experienced problems in at least one of those areas. As shown in table 3.1, 44 of the 187 employees (24 percent) who responded to our question found the redeployment assistance inadequate, while 51 employees (27 percent) said they had no basis to comment on the adequacy of the assistance because they did not receive assistance. Of the 76 employees who cited specific inadequacies, most said they needed help in understanding the redeployment process, accessing job announcements, determining the qualifications required for jobs, and researching their available options. Of the 183 employees who responded to our question on the adequacy of redeployment information, 36 (20 percent) considered it inadequate.
Of those who gave reasons, almost all said that IRS did not explain the redeployment process well enough for them to fully understand it. Employees wanted to know the policies and procedures so they could better determine what their options were. The third aspect of redeployment that many employees found inadequate was the quality of training. For example, 31 of 188 employees (16 percent) said that OJT was inadequate. The most frequently mentioned reasons were that OJT instructors lacked either sufficient subject knowledge or the communication skills needed to teach the practical application of the classroom instruction and that too many employees were assigned to each instructor for individual needs to receive adequate attention. Although such complaints may not be unique to a redeployment situation, we believe that they may have been partly the result of overextending training resources to respond to the increase in training requirements discussed in chapter 2. A slightly lower percentage of employees—29 of 188 (15 percent)—found classroom training inadequate. The most frequently mentioned reasons were that the amount of time allotted for classroom training was insufficient, particularly for those with little or no related experience, and that the subject coverage was inadequate or the training lacked a "hands-on" component for the related computer systems. As table 3.2 shows, 13 of the 30 gaining supervisors and 9 of the 24 losing supervisors said that they were dissatisfied with the way IRS handled the redeployment process. The gaining supervisors most frequently cited the following reasons for their dissatisfaction with the redeployment process: (1) redeployment allowed movement by seniority (the amount of time the employee had been with the federal government) rather than by work experience, and thus some of the employees redeployed were unqualified for the positions, and (2) redeployment resulted in too much personnel turnover. The reasons cited by losing supervisors were similar to those cited by the gaining supervisors. They said that their units received inexperienced employees through redeployment; they lost experienced employees; and communication between IRS management and employees was poor. Overall, the results of our interviews of redeployed employees and their supervisors suggested that many employees can be successfully redeployed to meet new job requirements. While it may take some time for redeployed employees to become fully productive in their new jobs, the vast majority of the redeployed employees included in our sample, and many more who were represented by the supervisors we interviewed, were reportedly meeting new job performance standards for their experience levels, although some needed supplemental training. Our results also suggested some dissatisfaction with the information, assistance, and training provided as part of the redeployment process to better prepare employees for jobs in the modernized environment. Although most redeployed employees were satisfied with their new jobs, many were dissatisfied with the quality and availability of redeployment information, assistance, and training. These employees said that they needed a more consistent, thorough, and understandable explanation of the redeployment process and how and when their jobs would be affected.
They also said that they needed (1) information on available assistance, training, and job vacancies; (2) job placement assistance, including help in determining the qualifications required by the new jobs; and (3) more and better qualified OJT instructors. We recommend that, as a part of managing any future redeployment effort, the Commissioner of Internal Revenue consider ways to improve management communications with employees concerning redeployment assistance, information, and training. In doing so, IRS might ask itself such things as whether it is providing information that clearly explains (1) redeployment policies and procedures; (2) which jobs are expected to be eliminated, continued, and redesigned and when; and (3) the nature and extent of available redeployment assistance. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the National President of NTEU, or their designees. We received written comments from IRS’ Chief, Management and Administration, on September 11, 1996, and NTEU’s National President on September 17, 1996. We also met with both parties, separately, on September 13, 1996, to discuss their comments. Neither party raised any objection to our recommendation or to the factual content of this chapter.
GAO reviewed the Internal Revenue Service's (IRS) initial efforts to redeploy employees under the terms of the Redeployment Understanding, focusing on whether there were lessons to be learned from: (1) IRS' initial use of redeployment procedures and their impact on IRS' operations; and (2) the reaction of redeployed employees and their supervisors to redeployment and the redeployment process. GAO found that: (1) if IRS develops new redeployment procedures, there are several lessons to be learned from its initial redeployment experiences; (2) although redeployment was intended as a way to move employees out of jobs that would no longer be needed in IRS' modernized environment, it was initially used to move thousands of employees whose jobs were not in immediate jeopardy into new or existing positions that were expected to be needed in the new environment; (3) many jobs vacated by redeployed employees had to be filled by new employees, who may subsequently have to be redeployed; (4) training requirements increased and productivity and taxpayer services declined as experienced employees were replaced by inexperienced employees; (5) although some operational inefficiencies, such as reduced productivity and increased training, can be expected as an inherent part of any redeployment process, the negotiated Redeployment Understanding exacerbated these inefficiencies because it generally made many IRS employees eligible for redeployment years before their jobs were expected to be eliminated, and did not allow IRS to fill jobs with employees who had related experience before bringing in volunteers from unrelated areas; (6) GAO's interviews of redeployed employees and supervisors pointed to other lessons that might be learned from IRS' initial redeployment efforts; and (7) most employees were generally satisfied with their new jobs, and supervisors were generally satisfied with their new employees, but many employees cited concerns about the information IRS provided to explain the redeployment process, the assistance IRS provided to help employees find jobs, and the training IRS provided.
SBA was established by the Small Business Act of 1953 to fulfill the role of several agencies that previously assisted small businesses affected by the Great Depression and, later, by wartime competition. SBA's stated purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. In addition, SBA serves as a small business advocate, working with other federal agencies to, among other things, reduce regulatory burdens on small businesses. SBA also provides low-interest, long-term loans to individuals and businesses to assist them with disaster recovery through its Disaster Loan Program—the only form of SBA assistance not limited to small businesses. Homeowners, renters, businesses of all sizes, and nonprofit organizations can apply for physical disaster loans for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. Small businesses can also apply for economic injury disaster loans to obtain working capital funds until normal operations resume after a disaster declaration. SBA's Disaster Loan Program differs from the Federal Emergency Management Agency's (FEMA) Individuals and Households Program (IHP). For example, a key element of SBA's Disaster Loan Program is that the disaster victim must have repayment ability before a loan can be approved, whereas FEMA makes grants under the IHP that do not have to be repaid. Further, FEMA grants are generally for minimal repairs and, unlike SBA disaster loans, are not designed to help restore the home to its predisaster condition. In January 2005, SBA began using DCMS to process all new disaster loan applications. SBA intended for DCMS to help it move toward a paperless processing environment by automating many of the functions staff members had performed manually under its previous system. These functions include obtaining referral data from FEMA and credit bureau reports, as well as completing and submitting loss verification reports from remote locations. Our July 2006 report identified several significant limitations in DCMS's capacity and other system and procurement deficiencies that likely contributed to the challenges that SBA faced in providing timely assistance to Gulf Coast hurricane victims, as follows: First, due to limited capacity, the number of SBA staff who could access DCMS at any one time to process disaster loans was restricted. Without access to DCMS, the ability of SBA staff to process disaster loan applications in an expeditious manner was diminished. Second, SBA experienced instability with DCMS during the initial months following Hurricane Katrina, as users encountered multiple outages and slow response times in completing loan processing tasks. According to SBA officials, the longest period of time DCMS was unavailable to users due to an unscheduled outage was 1 business day. These unscheduled outages and other system-related issues slowed productivity and affected SBA's ability to provide timely disaster assistance. Third, ineffective technical support and contractor oversight contributed to the DCMS instability that SBA staff initially encountered in using the system. Specifically, a DCMS contractor did not monitor the system as required or notify the agency of incidents that could increase system instability. Further, the contractor delivered computer hardware for DCMS to SBA that did not meet contract specifications.
In the report that we are releasing today, we identified other logistical challenges that SBA experienced in providing disaster assistance to Gulf Coast hurricane victims. For example, SBA moved urgently to hire more than 2,000 mostly temporary employees at its Ft. Worth, Texas, disaster loan processing center through newspaper and other advertisements (the facility increased from about 325 staff in August 2005 to 2,500 in January 2006). SBA officials said that ensuring the appropriate training and supervision of this large influx of inexperienced staff proved very difficult. Prior to Hurricane Katrina, SBA had not maintained the status of its disaster reserve corps, which was a group of potential voluntary employees trained in the agency’s disaster programs. According to SBA, the reserve corps, which had been instrumental in allowing the agency to provide timely disaster assistance to victims of the September 11, 2001 terrorist attacks, shrank from about 600 in 2001 to fewer than 100 in August 2005. Moreover, SBA faced challenges in obtaining suitable office space to house its expanded workforce. For example, SBA’s facility in Ft. Worth only had the capacity to house about 500 staff, whereas the agency hired more than 2,000 mostly temporary staff to process disaster loan applications. While SBA was able to identify another facility in Ft. Worth to house the remaining staff, it had not been configured to serve as a loan processing center. SBA had to upgrade the facility to meet its requirements. Fortunately, in 2005, SBA was also able to quickly reestablish a loan processing facility in Sacramento, California, that had been previously slated for closure under an agency reorganization plan. The facility in Sacramento was available because its lease had not yet expired, and its staff was responsible for processing a significant number of Gulf Coast hurricane-related disaster loan applications. As a result of these and other challenges, SBA developed a large backlog of applications during the initial months following Hurricane Katrina. This backlog peaked at more than 204,000 applications 4 months after Hurricane Katrina. By late May 2006, SBA took about 74 days on average to process disaster loan applications, compared with the agency’s goal of within 21 days. As we stated in our July 2006 report, the sheer volume of disaster loan applications that SBA received was clearly a major factor contributing to the agency’s challenges in providing timely assistance to Gulf Coast hurricane victims. As of late May 2006, SBA had issued 2.1 million loan applications to hurricane victims, which was four times the number of applications issued to victims of the 1994 Northridge, California, earthquake, the previous single largest disaster that the agency had faced. Within 3 months of Hurricane Katrina making landfall, SBA had received 280,000 disaster loan applications, or about 30,000 more applications than the agency received over a period of about 1 year after the Northridge earthquake. However, our two reports on SBA’s response to the Gulf Coast hurricanes also found that the absence of a comprehensive and sophisticated planning process contributed to the challenges that the agency faced. For example, in designing DCMS, SBA used the volume of applications received during the Northridge, California, earthquake and other historical data as the basis for planning the maximum number of concurrent agency users that the system could accommodate.
SBA did not consider the likelihood of more severe disaster scenarios or, in contrast to insurance companies and some government agencies, use the information available from catastrophe models or disaster simulations to enhance its planning process. Since the number of disaster loan applications associated with the Gulf Coast hurricanes greatly exceeded that of the Northridge earthquake, DCMS’s user capacity was not sufficient to process the surge in disaster loan applications in a timely manner. Additionally, SBA did not adequately monitor the performance of a DCMS contractor or stress test the system prior to its implementation. In particular, SBA did not verify that the contractor provided the agency with the correct computer hardware specified in its contract. SBA also did not completely stress test DCMS prior to implementation to ensure that the system could operate effectively at maximum capacity. If SBA had verified the equipment as required or conducted complete stress testing of DCMS prior to implementation, its capacity to process Gulf Coast-related disaster loan applications might have been enhanced. In the report we are releasing today, we found that SBA did not engage in comprehensive disaster planning for other logistical areas—such as workforce or space acquisition planning—prior to the Gulf Coast hurricanes at either the headquarters or field office levels. For example, SBA had not taken steps to help ensure the availability of additional trained and experienced staff such as (1) cross-training agency staff not normally involved in disaster assistance to provide backup support or (2) maintaining the status of the disaster reserve corps as I previously discussed. In addition, SBA had not thoroughly planned for the office space requirements that would be necessary in a disaster the size of the Gulf Coast hurricanes. While SBA had developed some estimates of staffing and other logistical requirements, it largely relied on the expertise of agency staff and previous disaster experiences—none of which reached the magnitude of the Gulf Coast hurricanes—and, as was the case with DCMS planning, did not leverage other planning resources, including information available from disaster simulations or catastrophe models. In our July 2006 report, we recommended that SBA take several steps to enhance DCMS, such as reassessing the system’s capacity in light of the Gulf Coast hurricane experience and reviewing information from disaster simulations and catastrophe models. We also recommended that SBA strengthen its DCMS contractor oversight and further stress test the system. SBA agreed with these recommendations. I note that SBA has completed an effort to expand DCMS’s capacity. SBA officials said that DCMS can now support a minimum of 8,000 concurrent agency users as compared with only 1,500 concurrent users for the Gulf Coast hurricanes. Additionally, SBA has awarded a new contract for the project management and information technology support for DCMS. The contractor is responsible for a variety of DCMS tasks on SBA’s behalf, including technical support, software changes and hardware upgrades, and supporting all information technology operations associated with the system. In the report we are releasing today, we identified other measures that SBA has planned or implemented to better prepare for and respond to future disasters.
These steps include appointing a single individual to lead the agency’s disaster preparedness planning and coordination efforts, enhancing systems to forecast the resource requirements to respond to disasters of varying scenarios, and redesigning the process for reviewing applications and disbursing loan proceeds. Additionally, SBA has planned or initiated steps to help ensure the availability of additional trained and experienced staff in the event of a future disaster. According to SBA officials, these steps include cross-training staff not normally involved in disaster assistance to provide backup support, reaching agreements with private lenders to help process a surge in disaster loan applications, and reestablishing the disaster reserve corps with 750 individuals as of January 2007. However, the report also discusses apparent limitations we found in SBA’s disaster planning processes. For example, SBA has not established a time line for completing the key elements of its disaster management plan, such as cross-training nondisaster staff to provide backup support. In addition, SBA has not assessed whether the agency could leverage outside resources to enhance its disaster planning and preparation efforts, such as information available from disaster simulations and catastrophe models. Finally, SBA has not established a long-term process to help ensure that it could acquire suitable office space to house an expanded workforce to respond to a future disaster. To strengthen SBA’s capacity to respond to a future disaster, the report recommends that SBA develop time frames for completing key elements of the disaster management plan (and a long-term strategy for acquiring adequate office space) and direct staff involved in developing the disaster plan to continue assessing whether the use of disaster simulations or catastrophe models would enhance the agency’s overall disaster planning process. SBA agreed to implement each of these recommendations. However, it remains to be seen how comprehensive SBA’s final disaster management plan will be and how effectively the agency will respond to a future disaster. Madam Chairwoman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Wesley Phillips, Assistant Director; Marshall Hamlett; Barbara S. Oliver; and Cheri Truett. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration (SBA) helps individuals and businesses recover from disasters such as hurricanes through its Disaster Loan Program. SBA faced an unprecedented demand for disaster loan assistance following the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), which resulted in extensive property damage and loss of life. In the aftermath of these disasters, concerns were expressed regarding the timeliness of SBA's disaster assistance. GAO initiated work and completed two reports under the Comptroller General's authority to conduct evaluations and determine how well SBA provided victims of the Gulf Coast hurricanes with timely assistance. This testimony, which is based on these two reports, discusses (1) challenges SBA experienced in providing victims of the Gulf Coast hurricanes with timely assistance, (2) factors that contributed to these challenges, and (3) steps SBA has taken since the Gulf Coast hurricanes to enhance its disaster preparedness. GAO visited the Gulf Coast region, reviewed SBA planning documents, and interviewed SBA officials. GAO identified several significant system and logistical challenges that SBA experienced in responding to the Gulf Coast hurricanes that undermined the agency's ability to provide timely disaster assistance to victims. For example, the limited capacity of SBA's automated loan processing system--the Disaster Credit Management System (DCMS)--restricted the number of staff who could access the system at any one time to process disaster loan applications. In addition, SBA staff who could access DCMS initially encountered multiple system outages and slow response times in completing loan processing tasks. SBA also faced challenges training and supervising the thousands of mostly temporary employees the agency hired to process loan applications and obtaining suitable office space for its expanded workforce. As of late May 2006, SBA processed disaster loan applications, on average, in about 74 days compared with its goal of within 21 days. While the large volume of disaster loan applications that SBA received clearly affected its capacity to provide timely disaster assistance to Gulf Coast hurricane victims, GAO's two reports found that the absence of a comprehensive and sophisticated planning process beforehand likely limited the efficiency of the agency's initial response. For example, in designing the capacity of DCMS, SBA primarily relied on historical data such as the number of loan applications that the agency received after the 1994 Northridge, California, earthquake--the most severe disaster that the agency had previously encountered. SBA did not consider disaster scenarios that were more severe or use the information available from disaster simulations (developed by federal agencies) or catastrophe models (used by insurance companies to estimate disaster losses). SBA also did not adequately monitor the performance of a DCMS contractor or completely stress test the system prior to its implementation. Moreover, SBA did not engage in comprehensive disaster planning prior to the Gulf Coast hurricanes for other logistical areas, such as workforce planning or space acquisition, at either the headquarters or field office levels. In the aftermath of the Gulf Coast hurricanes, SBA has planned or initiated several measures that officials said would enhance the agency's capacity to respond to future disasters. 
For example, SBA has completed an expansion of DCMS's user capacity to support a minimum of 8,000 concurrent users as compared with just 1,500 for the Gulf Coast hurricanes. Additionally, SBA initiated steps to increase the availability of trained and experienced disaster staff and redesigned its process for reviewing loan applications and disbursing funds. However, SBA has not established a time line for completing key elements of its disaster management plan, such as cross-training agency staff not typically involved in disaster assistance to provide backup support in an emergency. SBA also has not (1) assessed whether its disaster planning process could benefit from the supplemental use of disaster simulations or catastrophe models and (2) developed a long-term strategy to obtain suitable office space for its disaster staff. While SBA agreed with GAO's report recommendations to address these concerns, it remains to be seen how comprehensive the agency's final disaster plan will be and how the agency will respond to a future disaster.
Over the years, we and others have examined the effects on the refuge system of secondary activities, such as recreation, military activities, and oil and gas activities—which include oil and gas exploration, drilling and production, and transport. Exploring for oil and gas involves seismic mapping of the subsurface topography. Seismic mapping requires surface disturbance, often involving small dynamite charges placed in a series of holes, typically in patterned grids. Oil and gas drilling and production often requires constructing, operating, and maintaining industrial infrastructure, including a network of access roads and canals, local pipelines to connect well sites to production facilities and to dispose of drilling wastes, and gravel pads to house the drilling and other equipment. In addition, production may require storage tanks, separating facilities, and gas compressors. Finally, transporting oil and gas to production facilities or to users generally requires transit pipelines. Department of the Interior regulations generally prohibit the leasing of federal minerals underlying refuges. In addition, under the National Wildlife Refuge System Administration Act of 1966, as amended, the Fish and Wildlife Service (FWS) is responsible for regulating all activities on refuges. The act requires FWS to determine the compatibility of activities with the purposes of the particular refuge and the mission of the refuge system and not allow those activities deemed incompatible. FWS does not apply the compatibility requirement to the exercise of private mineral rights on refuges. However, the activities of private mineral owners on refuges are subject to a variety of other legal restrictions under federal law. For example, the Endangered Species Act of 1973 prohibits the “take” of any endangered or threatened species and provides for penalties for violations of the act; the Migratory Bird Treaty Act prohibits killing, hunting, possessing, or selling migratory birds, except in accordance with a permit; and the Clean Water Act prohibits discharging oil and other harmful substances into waters of the United States and imposes liability for removal costs and damages resulting from a discharge. Also, FWS regulations require that oil and gas activities be performed in a way that minimizes the risk of damage to the land and wildlife and disturbance to the operation of the refuge. The regulations also require that land affected be reclaimed after operations have ceased. At least one-quarter, or 155, of the 575 refuges (538 refuges and 37 wetland management districts) that constitute the National Wildlife Refuge System have past or present oil and gas activities—exploration, drilling and production, transit pipelines, or some combination of these (see table 1). Since 1994, FWS records show that 44 refuges have had some type of oil and gas exploration activities—geologic study, survey, or seismic mapping. We also identified at least 107 refuges with transit pipelines. These pipelines are almost exclusively buried, vary in size, and carry a variety of products, including crude oil, refined petroleum products, and high-pressure natural gas. Transit pipelines may also have associated storage facilities and pumping stations, but data are not available to identify how many of these are on refuges. Over 4,400 oil and gas wells are located within 105 refuges. Although refuges with oil and gas wells are present in every FWS region, they are more heavily concentrated near the Gulf Coast of the United States.
About 4 out of 10 wells (41 percent) located on refuges were known to be actively producing oil or gas or disposing of produced water during the most recent 12-month reporting period, as of January 2003. Of the 105 refuges with oil and gas wells, 36 refuges have actively producing wells. The remaining 2,600 wells did not produce oil, gas, or water during the last 12 months; many of these were plugged and abandoned or were dry holes. During the most recent 12-month reporting period, the 1,806 active wells produced 23.7 million barrels of oil and 88,171 million cubic feet of natural gas, about 1.1 and 0.4 percent of total domestic oil and gas production, respectively. Based on 2001 average prices, refuge-based production had an estimated total commercial value of $880 million. Substantial oil and gas activities also occur outside but near refuge boundaries. An additional 4,795 wells and 84 transit pipelines reside within one-half mile of refuge boundaries. The 4,795 wells bound 123 refuges, 33 of which do not have any resident oil and gas wells. The 84 pipelines border 42 different refuges. While FWS does not own the land outside refuge boundaries, lands surrounding refuges may be designated for future acquisition. The overall environmental effects of oil and gas activities on refuge resources are unknown because FWS has conducted few cumulative assessments and has no comprehensive data. Available studies, anecdotal information, and our observations show that some refuge resources have been diminished to varying degrees by spills of oil, gas, and brine and through the construction, operation, and maintenance of the infrastructure necessary to extract oil and gas. The damage varies widely in severity, duration, and visibility, ranging from infrequent small oil spills and industrial debris with no known effect on wildlife, to large and chronic spills causing wildlife deaths and long-term soil and water contamination. Some damage, such as habitat loss because of infrastructure development and soil and water contamination, may last indefinitely while other damage, such as wildlife disturbance during seismic mapping, is of shorter duration. Also, while certain types of damage are readily visible, others, such as groundwater contamination, changes in hydrology, and reduced habitat quality from infrastructure development are difficult to observe, quantify, and associate directly with oil and gas activities. Finally, oil and gas activities on refuges may hinder public access to parts of the refuge or FWS’s ability to manage or improve refuge habitat, such as by conducting prescribed burns or creating seasonal wetlands. The 16 refuges we visited reported oil, gas, or brine spills, although the frequency and effects of the spills varied widely. Oil and gas spills can injure or kill wildlife by destroying the insulating capacity of feathers and fur, depleting oxygen available in water, or exposing wildlife to toxic substances. Brine spills can be lethal to young waterfowl, damage birds’ feathers, kill vegetation, and decrease nutrients in water. Even small spills may contaminate soil and sediments if they occur frequently. For instance, a study of Atchafalaya and Delta National Wildlife Refuges in Louisiana found that oil contamination present near oil and gas facilities is lethal to most species of wildlife, even though refuge staff were not aware of any large spills. 
Constructing, operating, and maintaining the infrastructure necessary to produce oil and gas can harm wildlife by reducing the quantity and quality of habitat. Infrastructure development can reduce the quality of habitat through fragmentation, which occurs when a network of roads, canals, and other infrastructure is constructed in previously undeveloped areas of a refuge. Fragmentation increases disturbances from human activities, provides pathways for predators, and helps spread nonnative plant species. For example, officials at Anahuac and McFaddin National Wildlife Refuges in Texas said that disturbances from oil and gas activities are likely significant and expressed concern that bird nesting may be disrupted. However, no studies have been conducted at these refuges to determine the effect of these disturbances. Infrastructure networks can also damage refuge habitat by changing the hydrology of the refuge ecosystem, particularly in coastal areas. In addition, industrial activities associated with extracting oil and gas have been found to contaminate wildlife refuges with toxic substances such as mercury and polychlorinated biphenyls (PCBs). Mercury and PCBs were used in equipment such as compressors, transformers, and well production meters, although generally they are no longer used. New environmental laws and industry practice and technology have reduced, but not eliminated, some of the most detrimental effects of oil and gas activities. For example, Louisiana now generally prohibits using open pits to store production wastes and brine in coastal areas and discharging brine into drainages or state waters. Also, improvements in technology may allow operators to avoid placing wells in sensitive areas such as wetlands. However, oil and gas infrastructure continues to diminish the availability of refuge habitat for wildlife, and spills of oil, gas, and brine that damage fish and wildlife continue to occur. In addition, several refuge managers reported that operators do not always comply with legal requirements or follow best industry practices, such as constructing earthen barriers around tanks to contain spills, covering tanks to protect wildlife, and removing pits that temporarily store fluids used during well maintenance. Oil and gas operators have taken steps, in some cases voluntarily, to reverse damages resulting from oil and gas activities, but operators have not consistently taken such steps, and the adequacy of these steps is not known. For example, an operator at McFaddin National Wildlife Refuge removed a road and a well pad that had been constructed to access a new well site and restored the marsh damaged by construction after the well was no longer needed. In contrast, in some cases, officials do not know if remediation following spills is sufficient to protect refuge resources, particularly for smaller oil spills or spills into wetlands. FWS does not have a complete and accurate record of spills and other damage resulting from refuge-based oil and gas activities, has conducted few studies to quantify the extent of damage, and therefore does not know its full extent or the steps needed to reverse it. The lack of information on the effects of oil and gas activities on refuge wildlife hinders FWS’s ability to identify and obtain appropriate mitigation measures and to require responsible parties to address damages from past activities. 
Lack of sufficient information has also hindered FWS’s efforts to identify all locations with past oil and gas activities and to require responsible parties to address damages. FWS does not know the number or location of all abandoned wells and other oil and gas infrastructure or the threat of contamination they pose and, therefore, its ability to require responsible parties to address damages is limited. However, in cases where FWS has performed studies, the information has proved valuable. For example, FWS funded a study at some refuges in Oklahoma and Texas to inventory locations containing oil and gas infrastructure, to determine if they were closed legally, and to document their present condition. FWS intends to use this information to identify cleanup options with state and federal regulators. If this effort is successful, FWS may conduct similar studies on other refuges. FWS’s management and oversight of oil and gas activities varies widely among refuges. Management control standards for federal agencies require federal agencies to identify risks to their assets, provide guidance to mitigate these risks, and monitor compliance. For FWS, effectively managing oil and gas activities on refuges would entail, at a minimum, identifying the extent of oil and gas activities and their attendant risks, developing procedures to minimize damages by issuing permits with conditions to protect refuge resources, and monitoring the activities with trained staff to ensure compliance and accountability. However, the 16 refuges we visited varied widely in the extent to which these management practices occur. Some refuges identify oil and gas activities and the risks they pose to refuge resources, issue permits that direct operators to minimize the effect of their activities on the refuge, monitor oil and gas activities with trained personnel, and charge mitigation fees or pursue legal remedies if damage occurs. For example, two refuges in Louisiana collect mitigation fees from oil and gas operators that are then used to pay for monitoring operator compliance with permits and state and federal laws. In contrast, other refuges do not issue permits or collect fees, are not aware of the extent of oil and gas activities or the attendant risks to refuge resources, and provide little management and oversight. Management and oversight of oil and gas activities varies for two primary reasons. First, FWS’s legal authority to require oil and gas operators to obtain access permits with conditions to protect refuge resources varies considerably depending upon the nature of the mineral rights. For reserved mineral rights—cases where the property owner retained the mineral rights when selling the land to the federal government—FWS can require permits only if the property deed subjects the rights to such requirements. For outstanding mineral rights—cases where the mineral rights were separated from the surface lands before the government acquired the property—FWS has not formally determined its position regarding its authority to require access permits. However, we believe, based on statutory language and court decisions, that FWS has the authority to require owners of outstanding mineral rights to obtain permits. Second, refuge managers lack sufficient guidance, resources, and training to properly monitor oil and gas operators. Current FWS guidance regarding the management of oil and gas activities where there are private mineral rights is unclear, according to refuge staff. 
Refuge staff said they also lack sufficient resources to oversee oil and gas activities, which are substantial at some refuges. Only three refuges in the system have staff dedicated full-time to monitoring these activities, and some refuge staff cite a lack of time as a reason for limited oversight. Staff also cite a lack of training as limiting their capability to oversee oil and gas operators; FWS has offered only one oil- and gas-related workshop in the last 10 years. On a related management issue, FWS has not always thoroughly assessed property for possible contamination from oil and gas activities prior to its acquisition, even though FWS guidance requires an assessment of all possible contamination. For example, FWS acquired one property that is contaminated from oil and gas activities because staff did not adequately assess the subsurface property before acquiring it. After acquiring the property, FWS found that large amounts of soil were contaminated with oil. FWS has thus far spent $15,000, and a local conservation group spent another $43,000, to address the contamination. We found that the guidance and oversight provided to FWS regional and refuge personnel were not adequate to ensure that the requirements were being met. The National Wildlife Refuge System is a national asset established principally for the conservation of wildlife and habitat. While federally owned mineral rights underlying refuge lands are generally not available for oil and gas exploration and production, that prohibition does not extend to the many private parties that own mineral rights underlying refuge lands. The scale of these activities on refuges is such that some refuge resources have been diminished, although the extent is unknown without additional study. Some refuges have adopted practices—for example, developing data on the nature and extent of activities and their effects on the refuge, overseeing oil and gas operators, and training refuge staff to better carry out their management and oversight responsibilities—that limit the impact of these activities on refuge resources. If these practices were implemented throughout the agency, they could provide better assurance that environmental effects from oil and gas activities are minimized. In particular, in some cases, refuges have issued permits that establish operating conditions for oil and gas activities, giving the refuges greater control over these activities and protecting refuge resources before damage occurs. However, FWS does not have a policy requiring owners of outstanding mineral rights to obtain a permit, although we believe FWS has this authority, and FWS can require owners of reserved mineral rights to obtain a permit if the property deed subjects the rights to such requirements. Confirming or expanding FWS’s authority to require reasonable permit conditions and oversee oil and gas activities, including cases where mineral rights have been reserved and the property deed does not already subject the rights to permit requirements, would strengthen and provide greater consistency in FWS’s management and oversight. Such a step could be done without infringing on the rights of private mineral owners. Finally, FWS’s land acquisition guidance is unclear and oversight is inadequate, thereby exposing the federal government to unexpected cleanup costs for properties acquired without adequately assessing contamination from oil and gas activities. 
In our report, we made several recommendations to improve the framework for managing and overseeing oil and gas activities on national wildlife refuges, including (1) collecting and maintaining better data on oil and gas activities and their environmental effects, and ensuring that staff resources, funding, and training are sufficient and (2) determining FWS’s existing authority over outstanding mineral rights. We also recommended that the Secretary of the Interior, in coordination with appropriate Administration officials, seek from Congress any necessary additional authority over outstanding mineral rights, and over reserved mineral rights, to ensure that a consistent and reasonable set of regulatory and management controls are in place for all oil and gas activities occurring on national wildlife refuges. The Department of the Interior’s response to our recommendations was mixed. The department was silent on our recommendations that it should collect and maintain better data on oil and gas activities and their effects and that it should ensure that staff are adequately trained to oversee oil and gas activities. Also, while the department was silent on whether it should review FWS’s authority to regulate outstanding mineral rights, it raised procedural concerns about our recommendation that it seek any necessary additional authority from Congress to regulate private mineral rights. We continue to believe that our recommendation is warranted. In light of the department’s opposition, we suggested that the Congress consider expanding the FWS’s authority to enable it to consistently regulate the surface activities of private mineral owners on refuges. Thank you Mr. Chairman and Members of the Subcommittee. That concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Paul Aussendorf, Robert Crystal, Jonathan Dent, Doreen Feldman, and Bill Swick. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The 95 million acres in the National Wildlife Refuge System are the only federal lands primarily devoted to the conservation and management of fish, wildlife, and plant resources. While the federal government owns the surface lands in the system, in many cases private parties own the subsurface mineral rights and have the legal authority to explore for and extract oil and gas. This testimony is based on an August 2003 report (GAO-03-517) in which GAO determined the extent of oil and gas activity on refuges, identified the environmental effects, and assessed the Fish and Wildlife Service's management and oversight of those activities. About one-quarter (155 of 575) of all refuges have past or present oil and gas activities, some dating to at least the 1920s. Activities range from exploration to drilling and production to pipelines transiting refuge lands. One hundred five refuges contain a total of 4,406 oil and gas wells--2,600 inactive wells and 1,806 active wells. The 1,806 active wells, located at 36 refuges, many of them near the Gulf Coast, produced oil and gas valued at $880 million during the last 12-month reporting period, roughly 1 percent of domestic production. Thirty-five refuges contain only pipelines. The Fish and Wildlife Service has not assessed the cumulative environmental effects of oil and gas activities on refuges. Available studies, anecdotal information, and GAO's observations show that the environmental effects of oil and gas activities vary from negligible, such as effects from buried pipelines, to substantial, such as effects from large oil spills or from large-scale infrastructure. These effects also vary from the temporary to the longer term. Some of the most detrimental effects of oil and gas activities have been reduced through environmental laws and improved practices and technology. Moreover, oil and gas operators have taken steps, in some cases voluntarily, to reverse damages resulting from oil and gas activities. Federal management and oversight of oil and gas activities varies widely among refuges--some refuges take extensive measures, while others exercise little control or enforcement. GAO found that this variation occurs because of differences in authority to oversee private mineral rights and because refuge managers lack enough guidance, resources, and training to properly manage and oversee oil and gas activities. Greater attention to oil and gas activities by the Fish and Wildlife Service would increase its understanding of associated environmental effects and contribute to more consistent use of practices and technologies that protect refuge resources.
Income in retirement may come from several sources, including (1) Social Security, (2) payments from employment-based DB plans, (3) savings in retirement plans, such as in a 401(k) plan or IRA, including the return on these savings, and (4) other sources, including non-retirement savings, home equity, and wages.

(1) Social Security: Social Security pays benefits to retirees, their spouses, and their survivors, as well as to some disabled workers. According to the Social Security Administration (SSA), as of 2012, 86 percent of households age 65 and older received Social Security benefits. Benefits are paid to workers who meet requirements for the time they have worked in “covered employment”—jobs through which workers pay Social Security taxes, which cover about 96 percent of U.S. workers, according to SSA. Workers can claim benefits starting at age 62 (or when they become disabled), but for retiring workers the monthly benefit they receive increases the longer they delay receiving them, up until age 70. Monthly Social Security benefits are based on a worker’s earnings history and are progressive, meaning that Social Security replaces a higher percentage of earnings for lower-income workers and their dependents than for higher-income workers. Social Security benefits offer two main advantages: they are a monthly stream of payments that continue until death, and they adjust annually for cost-of-living increases. According to the 2014 report from the Social Security Board of Trustees, the Old-Age and Survivors Insurance (OASI) trust fund from which Social Security benefits are paid is projected to become depleted in 2034, at which point continuing income is projected to be sufficient to cover just 75 percent of scheduled benefits. This projection raises the possibility of changes to Social Security benefits, taxation, or both before the depletion date.

(2) Defined Benefit Plans: These plans are “traditional” employment-based pension plans that offer benefits typically determined by a formula based on factors specified by the plan, such as salary and years of service. DB plans typically offer pension benefits in the form of an annuity that provides a monthly payment for life, although some plans also offer a lump-sum distribution option. An annuity can help to protect a retiree against risks, including the risk of outliving one’s assets (longevity risk), and may also offer survivor benefits. However, DB plans carry the risk that a plan sponsor may freeze or terminate the plan. If a private-sector plan terminates with insufficient assets to pay promised benefits, the Pension Benefit Guaranty Corporation (PBGC), a federal government corporation, provides plan insurance and pays promised benefits subject to certain statutory limits, which may result in some beneficiaries getting reduced benefits.

(3) Retirement Savings: Two primary types of retirement savings vehicles, both introduced over 30 years ago, currently exist: employment-sponsored DC plans (such as 401(k) plans) and IRAs. For both types, benefits accrue in the form of account balances, which grow from contributions made by workers (and sometimes by their employers) and investment returns. Examples of employer-sponsored DC plans include 401(k) plans, 403(b) plans, and similar plans for which employers can offer payroll deductions, employer contributions to employee accounts, or both.
Individuals can also save for retirement through IRAs, which allow individuals to make contributions for retirement without participating in an employment-sponsored plan. DC plans and IRAs provide tax advantages, portability of savings, and transparency of known account balances. However, they also place the primary responsibility on individuals to participate in, contribute to, and manage their accounts throughout their working careers, and to manage their savings throughout retirement in order to keep from running out of money. For 2015, individuals can contribute up to $5,500 in IRAs ($6,500 for those age 50 or older), while the contribution limit for 401(k) plans is $18,000 ($24,000 for those age 50 or older). Contributions to 401(k) plans and traditional IRAs are not subject to tax when made (26 U.S.C. §§ 402(e)(3) and 219(a) and (e), respectively); distributions or withdrawals of principal or earnings from them are subject to tax (26 U.S.C. §§ 402(a) and 408(c)(1), respectively). Contributions to Roth IRAs are not tax-deductible, but after a Roth IRA has been established for 5 years, upon reaching age 59½, an individual may make withdrawals of principal or earnings not subject to tax (26 U.S.C. § 408A(c) and (d)). The estimated federal revenue loss associated with these accounts included $51.8 billion for DC plans and $16.2 billion for IRAs.

Employment-based retirement plan coverage, especially in the private sector, has shifted from DB to DC plans. According to the Department of Labor, as of 2012 private-sector DB plans had almost 40 million participants, while DC plans had about 91 million (these figures, which are from the U.S. Department of Labor, Employee Benefits Security Administration, “Private Pension Plan Bulletin Historical Tables and Graphs,” December 2014, may double-count individuals who have both a DB and a DC plan). In contrast, in 1975, about three-quarters of private-sector pension participants had DB plans, and half of all participants in 1990 had DB plans. According to Federal Reserve data, as of the third quarter of 2014, U.S. DB plans held about $11.2 trillion in assets, while IRA assets totaled about $7.3 trillion and DC assets accounted for about $6.2 trillion. Rollovers from 401(k) plans and other employment-sponsored plans are the predominant source of contributions to IRAs.

(4) Other sources: Earnings from work can be an important source of income for some households with a member age 65 or older, especially for those with a spouse younger than 62 who is not yet eligible to receive Social Security benefits. Home equity is another potential resource; considering imputed rent as income treats owner-occupied housing neutrally compared to renter-occupied housing. For example, consider two homeowners who each live in their homes and pay a $1,000 mortgage. If they moved into each other’s home and received $1,000 per month rent, that $1,000 would be considered income, even though nothing has changed about either household’s balance sheet or net expenses.

According to our analysis of data from the 2013 SCF, 52 percent of households age 55 and older have no retirement savings in a DC plan or IRA, and Social Security provides most of the retirement income for about half of households age 65 and older. Among the 48 percent of households age 55 and older with some retirement savings, the median amount is approximately $109,000, commensurate to an inflation-protected annuity of $405 per month at current rates for a 65-year-old. Households that have sizeable retirement savings are more likely than households with lower savings to have other resources, including a higher likelihood of expecting retirement income from a DB plan.
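As context for the annuity-equivalent figures cited here and in the discussion that follows, the conversion from a lump sum to a monthly payment can be approximated with the standard level-payment annuity formula. This is a simplified sketch: the figures in this statement reflect current market quotes for insured, inflation-protected lifetime annuities, whereas the interest rate (4 percent per year) and the 25-year payout period below are illustrative assumptions rather than values from our analysis:

\text{PMT} = PV \cdot \frac{r}{1-(1+r)^{-n}} \approx \$109{,}000 \cdot \frac{0.04/12}{1-(1+0.04/12)^{-300}} \approx \$575 \text{ per month}

A product that also guarantees payments for life and adjusts them for inflation pays less at the outset than this fixed-term, nominal calculation, which is consistent with the roughly $405 per month quoted above for a 65-year-old.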
Nearly 30 percent of households age 55 and older have neither retirement savings nor a DB plan (see fig. 1). Social Security remains the largest component of household income in retirement, making up an average of 52 percent of household income for those age 65 and older. About 55 percent of households age 55-64 have less than $25,000 in retirement savings, including 41 percent who have zero (see fig. 2 for additional detail). Most of the households in this age group have some other resources or benefits from a DB plan, but 27 percent of this age group have neither retirement savings nor a DB plan. Among households age 55-64, the 41 percent with no retirement savings have few other financial resources, but they are less likely to have debt than those with retirement savings. For example, around 85 percent have less than $25,000 in total financial assets, such as in savings accounts or non-retirement investments. Compared to those with retirement savings, these households have about a third of the median income, about one-fifteenth of the median net worth, and are less likely to be covered by a DB plan (see table 1). Regarding debt, households without retirement savings are less likely to have debt than households with savings (about 70 percent compared to 84 percent). Their debt levels are comparable, though, as about 20 percent of households from each category have debt amounts that are more than twice their annual income. Perhaps of greatest concern are the 27 percent of all households age 55-64 that have neither retirement savings nor a DB plan. Their median net worth is about $9,000, and 91 percent have less than $25,000 in financial assets. These households’ median home equity is about $53,000, which is less than half of what households with retirement savings or a DB plan have. Not surprisingly, they have an approximate median income of $21,000. About half of these households had wage or salary income, compared to 82 percent of households age 55-64 with some retirement savings or a DB plan. This indicates that a smaller portion of these households are likely working, which may limit their ability to accumulate retirement savings. About 46 percent had Social Security income, indicating that they may have claimed before the full retirement age and would receive reduced monthly benefits. For the 59 percent of households age 55-64 with some retirement savings, we estimate that the median amount saved is about $104,000, which is equivalent to an insured, inflation-protected annuity of $310 per month for a 60-year-old. While about 15 percent of these households have retirement savings amounts over $500,000, 11 percent have retirement savings below $10,000 and 24 percent have savings of less than $25,000 (see table 2 for additional detail). A savings amount of $25,000 is equivalent to an insured, inflation-protected annuity of $74 per month for a 60-year-old. Both retirement savings and DB plan coverage rise with income levels for age 55-64 households (see table 3). Across income quintiles, a similar percentage of households have a paid-off mortgage and debt levels above twice their income, whereas retirement savings and DB plan coverage generally increase with income. Turning to older households, retirement savings among those age 65-74 shows a distribution similar to those age 55-64, though a larger proportion has no retirement savings (52 percent). Similar to the younger group, about 10 percent have more than $500,000 in savings.
Another similarity is that many households age 65-74 with no retirement savings have few other resources to draw upon in retirement as measured by our indicators (see table 4). Compared to those in the same age group with retirement savings, households without retirement savings have about one-seventh the net worth, and fewer have a DB plan. Unlike households age 55-64, the debt profile for households without retirement savings is not substantially better than for households with some retirement savings. Similar to households age 55-64, a closer look at the 27 percent of households age 65-74 with no retirement savings and no DB plan reveals that they have very low levels of resources to draw upon for retirement income. This group has a median net worth of about $57,000, which is around one-sixth the net worth of other households of this age. Compared to households with some retirement savings or a DB plan, households in this age group generally have lower home ownership rates (about 67 percent compared to 93 percent) and less home equity when they do own homes (median home equity is about $100,000, compared to $148,000). For the 48 percent of households age 65-74 that have some retirement savings, we estimate that the median amount is $148,000, comparable to an insured, inflation-protected annuity of $649 per month for a 70-year-old at current rates. About one in five of these households has retirement savings amounts over $500,000, while 16 percent have savings less than $25,000 (see table 5 for additional detail). For all households age 65-74, median annual income is about $47,000, and Social Security makes up on average 44 percent of income for households in this age group, larger than any other income source. About 90 percent of all households in this age range receive some Social Security income, and the median amount they receive is approximately $19,000. About 41 percent of households in this age range rely on Social Security for over half of their income, while 14 percent rely on Social Security for more than 90 percent of their income. While Social Security is, on average, the largest component of household income in retirement, other sources also play a role in funding retirement for households age 65-74. Income from work and pension-based annuities, such as DB plans, contribute about a fifth of household income each, on average. Distributions from retirement savings make up a relatively small portion of average household income at 4 percent. Because Social Security and DB plans represent a relatively large portion of retiree income, it follows that much of the household income for this age group has some assurance that it will last a lifetime. Among households age 65-74, the prevalence of both retirement savings and DB plans generally increases with income (see table 6). As with the younger age group, not only do a larger proportion of higher-income households have some retirement savings, but the amount they have saved is also larger. Similarly, the annual amount they receive from their DB plan increases with income. Social Security makes up a larger share of household income for households with no retirement savings, which is not surprising as these households have lower incomes. The 52 percent of households age 65-74 with no retirement savings rely primarily on Social Security for income in retirement, as it makes up 57 percent of their household income on average (see figure 3).
These households have median income of approximately $29,000, and 25 percent of them rely on Social Security for more than 90 percent of their income. Those in the same age range who have some retirement savings have a median income of $76,000. Social Security makes up on average 31 percent of income for those with savings, about the same percentage that wage or salary income contributes. Reflecting Social Security’s progressive benefit structure, 86 percent of those in the lowest income quintile receive more than half of their household income from Social Security, while 66 and 44 percent of those in the second and third quintiles do, respectively. Households age 65-74 with no retirement savings or DB plan have about one-third the income of other households in the same age group and are even more likely to rely on Social Security. Specifically, their median income is about $19,000 compared to $60,000 for the other group. Only about a quarter of these households have wage income, compared to 49 percent of other households in this age range, while 45 percent of them relied on Social Security for over 90 percent of their income, compared to 3 percent for households with either some retirement savings or a DB plan. Households age 75 and older have even fewer retirement assets than younger households, and only 29 percent have retirement savings. About 35 percent have neither retirement savings nor a DB plan, though a larger percentage of households in this age group have a DB plan than those nearing retirement (55 percent compared to 40 percent for households age 55-64). Of those households that have savings, the median savings is approximately $69,000, which is commensurate to an insured, inflation-protected annuity of $467 per month at current rates for an 80-year-old. Social Security provides the bulk (on average 61 percent) of household income for those 75 and older (see fig. 4). The median income for households age 75 and older is about $27,000, and the median Social Security income is approximately $17,000. When compared to younger households age 65-74, Social Security makes up a larger share of household income for retirees age 75 and older, with 62 percent of these households relying on Social Security for more than 50 percent of their income, and 22 percent relying on Social Security for more than 90 percent of their income. Moreover, according to Census data, about 43 percent of people 65 years and older would have incomes below the poverty level if they did not receive Social Security. As with the younger age groups, households age 75 and older with no retirement savings have fewer resources based on our indicators than those with some retirement savings, as one might expect. For example, their median net worth is about $127,000, compared to $435,000 for same-aged households with some retirement savings. Additionally, households with no retirement savings have lower homeownership rates than other households in the same age range (75 percent compared to 93 percent) and a smaller proportion own their homes outright (55 percent compared to 74 percent). A larger share of households in this age range have paid off their mortgages than have younger groups. Similarly, households age 75 and older with no retirement savings have lower median incomes than those with some retirement savings. Specifically, they have about half the median income of households with some retirement savings (about $24,000, compared to $47,000).
(We are 95 percent confident that the median income for households with no retirement savings is between $22,041 and $25,015, while it is between $39,833 and $53,729 for other households.) Retirement savings distributions contribute, on average, about 17 percent of household income among those with some retirement savings, adding a median amount of $4,000 to these households’ income. Households with retirement savings in this age group obtain just under half their income from Social Security on average (46 percent). Economists broadly agree that a conceptual benchmark measure for adequate retirement saving is an amount that will, along with other sources of retirement income, allow a household to maintain its pre-retirement standard of living into retirement. However, there is no consensus about how much income this standard requires. Economists and financial planners generally agree that many retirees do not need to replace 100 percent of working income in order to maintain their standard of living because most retirees probably have reduced expenses—for example, no longer needing to provide for payroll taxes, retirement saving, and commuting expenses—relative to when they were working. Other big expenses that many households may face while working but not while retired include the cost of raising children (who are likely grown and financially independent by the parents’ retirement age) and of housing if homeowners pay off their mortgage by retirement. Conversely, health costs may represent a greater expense for a household in retirement than while working. Setting a specific target for, and even calculating, the “replacement rate”—a household’s post-retirement income as a percentage of pre-retirement income—required to maintain a household’s standard of living requires many complicated assumptions. There is broad agreement over some aspects of replacement rates, at least in concept if not necessarily in practical application to calculations. Because higher-income households tend to pay a higher percentage of their income in taxes and save more for retirement while working, they generally require a lower replacement rate in retirement when these expenses decline; for the opposite reasons, lower-income households generally require higher replacement rates. For these reasons, there is no single replacement rate that represents a “success” for retirement income. Several studies have attempted to evaluate the adequacy of retirement income or project the likelihood of current workers having sufficient retirement income. Some of these studies attempt to judge the retirement readiness of workers by using data on consumption, income, and wealth for working-age households and projecting a replacement rate at retirement; they then compare this projection to a target replacement rate that they estimate to be enough to maintain a standard of living in retirement. As Table 7 shows, different studies use different replacement rate or other benchmarks for retirement income adequacy. The Center for Retirement Research at Boston College produces a National Retirement Risk Index (NRRI) based on data from the 2013 SCF and concludes that 52 percent of households face the risk of having insufficient retirement income to maintain their standard of living. This percentage is almost the same as the one calculated from the 2010 SCF and is up from 44 percent in 2007. However, at-risk percentages vary considerably by sub-group in the NRRI.
Within the NRRI, for example, Boston College calculates that 60 percent of households with income in the lowest third of the income distribution are at risk, and 43 percent of households in the highest third are at risk of having insufficient retirement income to maintain their pre-retirement standard of living. The NRRI also finds a greater percentage of households age 30-39 at risk than age 50-59.

The Employee Benefit Research Institute (EBRI) uses its Retirement Security Projection Model to project the percentage of workers at risk of having retirement income that is inadequate to cover minimum retirement expenditures. EBRI's projections show about 44 percent of their sample falling short of target retirement income. However, EBRI's projections show a much higher percentage of lower-income households at risk of falling short on retirement income: 12.5 percent of those born 1948-1954 in the highest-income quartile compared to 86.8 percent of the same cohort in the lowest-income quartile.

In a 2012 study, Aon Hewitt projects savings of its sample against a target 85 percent replacement rate and estimates that 85 percent of workers will fail to hit this target by age 65. Even when focusing on "full career" workers who have the potential to contribute to a retirement account for at least 30 years, 71 percent of these workers still are projected to fall short of the benchmark.

The 2015 National Institute on Retirement Security (NIRS) study, instead of using a projection model, uses the 2013 SCF to compare net worth among workers to financial industry-suggested savings benchmarks at different ages. NIRS finds that approximately two-thirds of workers have savings below the suggested benchmark, enough for an 85 percent replacement rate target at age 67. A 2012 Urban Institute study focuses on Baby Boom workers and retirees and sets a 75 percent replacement rate target, but measures retirement income at age 70. Depending on alternative assumptions they made about whether retirees annuitized retirement assets and how they calculated pre-retirement income, they find about 30 to 40 percent of their sample fell short of their replacement rate target.

Other studies have somewhat more optimistic conclusions about whether American workers are likely to have enough income in retirement to maintain their standard of living. A 2006 study by Scholz, Seshadri, and Khitatrakun uses the Health and Retirement Study to compare individuals' earnings and savings history against wealth predictions of a lifecycle model over a household's lifetime, with different targets for different household characteristics. They find that only 16 percent of households have savings below the predictions of their model. Their findings emphasize the impact of children and the progressive benefit structure of Social Security, which replaces a higher percentage of income for lower-income earners than higher-income earners, as key factors explaining how such a high percentage of households can reach retirement income adequacy. However, even with these factors, Scholz, Seshadri, and Khitatrakun find that the percentage of households with adequate retirement income declines with earnings: about 30 percent of lowest-decile earners undersave in their estimation, compared to 5 percent of the highest decile. A 2012 study by Hurd and Rohwedder similarly uses a lifecycle framework that estimates consumption paths of Health and Retirement Study households, based on consumption in the years prior to retirement, and projects which households have enough financial resources to maintain this consumption path until death.
The studies by Hurd and Rohwedder and by Scholz, Seshadri, and Khitatrakun assume that households value consumption later in retirement less than earlier, in part reflecting the declining probability of being alive later in life. This assumption lowers consumption targets later in retirement below what they would be under an assumption that households smooth their consumption throughout retirement. Hurd and Rohwedder find that 23 percent of married couples and 51 percent of single persons fall short of these targets; single households and those with less education are more likely to be unprepared for retirement by the study's targets.

A 2012 study from the Investment Company Institute (ICI) and a 2014 study by Andrew Biggs and Sylvester Schieber also question the conclusion that Americans are not saving adequately for retirement, although they do not set an adequacy benchmark based on replacement rate or standard of living targets against which to measure household savings. ICI argues that a "five-tiered pyramid" of retirement assets, made up of Social Security, employment-based DB and DC pensions, IRAs, housing equity, and other financial assets, has successfully provided for retirees. However, while they find that, based on 2010 data, most near-retiree households across income groups have some assets in an employment-sponsored plan or an IRA, they also find that the percentage of such households rises with income, from about half of households with income less than $30,000 to about 95 percent of households with income of at least $80,000. ICI also cites a lower percentage of 65-and-older Americans living in poverty than the overall population as evidence of success with the retirement system. They conclude that "on average" households are able to maintain their standard of living in retirement. The Biggs and Schieber study argues that replacement rates published in prior Social Security Trustees reports understated the extent to which Social Security benefits replace earnings because Social Security uses lifetime earnings (instead of final-year earnings) and indexes earnings to average wages instead of average prices. These assumptions, they argue, overstate income during working years, and thus published estimates understate how much Social Security benefits replace as a percentage of working income. Biggs and Schieber, like Scholz, Seshadri, and Khitatrakun, also argue that some studies set too-high replacement rate targets because they ignore the favorable economic impact of children leaving the household.

Assumptions about income targets and methodology help drive the conclusions of these different studies. Some considerations in evaluating all of these studies include:

How income and expenses may change during retirement. One limitation of replacement rate calculations is that they suggest a fixed amount of retirement income and expenses. In reality, retirement income may vary throughout retirement, depending in part on the degree to which a household's income is annuitized. To the extent that retirees have to manage savings in lump-sum form, such as in an IRA or DC plan, they face risk from investment returns and outliving their resources, among other factors. Even annuitized income, if not adjusted for inflation, may lose purchasing power, especially over longer retirement periods.
To the extent that Social Security makes up a significant portion of retirement income, as we find earlier in this report, the amount and purchasing power of income throughout retirement may be more predictable, as would annuitized income from a DB plan or any other annuitized wealth (if inflation adjusted). Similarly, expenses, especially health care, may be neither steady nor predictable in retirement. Finally, for women approaching or in retirement, becoming divorced, widowed, or unemployed can have detrimental effects on their income security.

How the impact of children on target income may be complicated. To the extent that children become independent long before parents retire, households approaching retirement may already have adjusted to higher levels of consumption, possibly raising their standard of living and required replacement rates. In retirement, the extent to which grown children may remain partially dependent on retired parents also would lessen the extent to which the cost of raising children is a foregone expense in retirement.

How income from Social Security may change. The 2014 Social Security Trustees' Report projects the Old-Age and Survivors Insurance Trust Fund, which pays Social Security retirement benefits, to become insolvent in 2034, at which point revenues are projected to be enough to cover 75 percent of scheduled benefits. If benefits are reduced, whether because of this projected insolvency or because of reforms to extend the solvency of the trust fund, this could represent a major challenge to households who rely heavily on Social Security for retirement income. Similarly, if reforms raised payroll taxes on workers, this could affect their ability to save for retirement. Further, as the normal retirement age continues to rise for receiving full benefits (gradually from 65 for beneficiaries born in 1937 or earlier to 67 for those born in 1960 or later), future Social Security replacement rates will fall unless workers delay claiming until they are older.

Surveys indicate that workers age 55 and older generally plan to retire at an older age and work more in retirement than current retirees actually did. These plans may indicate that the current cohort of workers nearing retirement will in fact work longer than current retirees did. However, if these expectations for retiring later prove unrealistic or do not come to fruition, workers' retirement security may be at risk, since workers may have fewer years to work and save for retirement than they are planning. According to the 2015 Employee Benefit Research Institute (EBRI) Retirement Confidence Survey, among workers 55 and older, nearly half say they plan to retire at 66 or older, while 14 percent of current retirees report having done so (see fig. 5). Gallup polling indicates that plans to retire later may be associated with low confidence in retirement savings. In a 2013 Gallup survey, baby boomers who strongly disagree with the statement "you have enough money to do everything you want to do" plan to retire at 73, while those who strongly agree with the statement plan to retire at 66. According to a 2013 Society of Actuaries survey, retirement expectations also vary by household income, with workers from lower-income households more likely to plan to retire at older ages than workers from higher-income households. Furthermore, among pre-retirees age 45 and older, 31 percent of those making less than $50,000 a year, 14 percent of those making between $50,000 and $99,000 a year, and 7 percent of those making $100,000 or more a year do not plan to retire.
Among those who said they do not plan to retire, the dominant reason was the expectation of never having enough money to retire (55 percent).

Other events outside a worker's control, such as the 2007-2009 recession, may have caused workers to change their retirement plans. The recession had disparate effects on people approaching retirement, causing some to retire earlier than expected, likely when they could not find employment, and others to retire later, likely because their retirement savings balances had dropped. According to a 2013 Federal Reserve study, 38 percent of people age 55-64 and 47 percent of people age 65-74 who had not yet retired reported that they delayed retirement since the recession, and 21 percent of people age 55-64 and 13 percent of people 65-74 who had retired reported retiring earlier than planned.

The 2013 survey sponsored by the Society of Actuaries found that current workers age 45 and older expect similar sources of income in retirement as current retirees are receiving, with a few key exceptions. Specifically, in one exception, 59 percent of pre-retirees expect to receive income from a DB plan while 73 percent of retirees receive income from a DB plan; in another, 81 percent of pre-retirees expect income from an employment-sponsored retirement savings plan, while 53 percent of retirees receive this. Most notably, 57 percent of pre-retirees expect employment, including self-employment, to constitute a source of income in retirement, while 28 percent of retirees report having this.

The Federal Reserve survey also suggests that many workers may unrealistically expect to continue working as long as possible or transition to new work when they "retire." Only 18 percent of workers approaching retirement who have done some planning for retirement expect to stop work completely at retirement, while 59 percent of workers plan to work as long as possible, or plan to shift jobs in retirement by finding a different job or working for themselves. This contrasts with the experiences of retirees, among whom 29 percent shifted jobs in retirement (see fig. 6). (These questions were asked of workers who have done some planning for retirement; retirees were able to report multiple responses.)

As compared to people age 55-64, many people over 65 report being able to manage financially. According to a Federal Reserve survey, 72 percent of people age 65-74 and 84 percent of people 75 and older say they are managing okay or better financially, while only 59 percent of people age 55-64 report they are managing okay or better financially.

While most people 65 and over have confidence in their retirement security, levels of confidence among people approaching retirement age are lower. According to an earlier EBRI survey, conducted in 2014, 69 percent of retirees say that their experience in retirement with respect to their finances has been about the same or better than they expected it to be. According to the 2013 Survey of Consumer Finances, two-thirds of households age 65-74 say their received or expected retirement income is at least enough to maintain living standards (66 percent). On the other hand, just over half (52 percent) of people age 55-64 say retirement income they expect or receive will be enough to maintain living standards.
However, confidence in affording certain types of expenses in retirement varies, suggesting that expenses such as for long-term care may be a cause of concern for retiree financial security. According to the EBRI study, 82 percent of retirees are very or somewhat confident they will have enough money to take care of basic expenses in retirement, 78 percent are very or somewhat confident they will have enough to take care of medical expenses during retirement, and 59 percent are very or somewhat confident they will have enough money to pay for long-term care should they need it during retirement.

Moreover, poverty rates are higher for people approaching retirement and people who are 75 and older. According to the Current Population Survey, about 8 percent of people age 65-74 and 11 percent of those age 75-84 are in poverty, which is also the poverty rate for people age 55-64. Twelve percent of people 85 and older are in poverty. The Supplemental Poverty Measure, an alternate poverty measure, found that 14 percent of people age 55-64 are in poverty according to this definition. For people 65-74, this number decreases to 12 percent, and then increases for the oldest Americans: 17 percent for people between 75-84, and 20 percent for people 85 and older. Lastly, according to the HRS, many retirees say that "not having enough income to get by" is a concern, with 41 percent of retirees saying that this "bothers or worries" them a lot.

According to the 2015 EBRI study, 23 percent of retirees report having worked for pay since they retired. The reasons people work in retirement vary, including that they enjoy working (83 percent) and want to stay active and involved (79 percent). Some other reasons include wanting money to buy extras (54 percent), needing money to make ends meet (52 percent), a decrease in the value of their savings or investments (38 percent), or keeping health insurance or other benefits (34 percent).

We provided a draft of this report to the Department of Labor, the Department of the Treasury, and the Social Security Administration for review and comment. The Department of Labor provided technical comments, which we incorporated as appropriate. The Department of the Treasury and the Social Security Administration did not have comments.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Commissioner of Social Security, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.

To analyze retirement savings and income for workers approaching retirement and for those of retirement age, we answered the following questions: 1. What financial resources do workers approaching retirement and current retirees have? 2. What evidence do studies and surveys provide about retirement security for workers and retirees?

To describe the financial resources of near and current retirees, we examined financial information from the 2013 Survey of Consumer Finances (SCF).
The SCF is a triennial survey of household assets and income from the Board of Governors of the Federal Reserve System (Federal Reserve). The 2013 SCF surveyed 6,026 U.S. households about their pensions, incomes, asset holdings and debts, and demographic information. The SCF is conducted using a dual-frame sample design. One part of the design is a standard, multistage area-probability design, while the second part is a special over-sample of relatively wealthy households. This is done in order to accurately capture financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to make estimates from the survey data representative of households overall. In addition, the SCF excludes people included in the Forbes magazine list of the 400 wealthiest people in the United States. Furthermore, the 2013 SCF dropped 11 observations from the public data set that had net worth at least equal to the minimum level needed to qualify for the Forbes list. We found the 2013 SCF to be reliable for the purposes of our report. While the SCF is a widely used federal data source, we conducted an assessment to ensure its reliability. Specifically, we reviewed related documentation and internal controls, spoke with agency officials, and conducted electronic testing. When we learned that particular estimates were not reliable for our purposes–such as estimates of future DB income–or had sample sizes too small to produce reliable estimates, we did not use them. Nonetheless, the SCF and other surveys that are based on self-reported data are subject to nonsampling error, including the inability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; respondents’ inability or unwillingness to provide correct information; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known. Estimates from the SCF are also subject to some sampling error since the 2013 SCF sample is one of a large number of random samples that might have been drawn. Since each possible sample could have provided different estimates, we express our confidence in the precision of the sample results as 95 percent confidence intervals. These intervals would contain the actual population values for 95 percent of the samples that could have been drawn. In this report, we report percentage or other numerical estimates along with their 95 percent confidence intervals. Unless otherwise noted, all percentage estimates based on the SCF have 95 percent confidence intervals that are within 3 percentage points, and all numerical estimates other than percentages have 95 percent confidence intervals that are within 5 percent of the estimate itself. All financial figures reported using SCF data are in 2013 dollars and most are rounded to the nearest thousand dollars. Where possible, we relied on variable definitions used for Federal Reserve publications using the SCF. For example, we used the Federal Reserve’s variable for age, which is the age of the household head. We also used the Federal Reserve’s variable for retirement savings, which included assets accrued in defined contribution (DC) plans such as 401(k) plans as well as individual retirement accounts (IRA). 
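As a rough illustration of the 95 percent confidence intervals described above, the sketch below computes an interval for a weighted median by bootstrap resampling. It is a simplified stand-in, not the actual estimation procedure: the SCF's published variance estimation relies on replicate weights and multiply imputed data, which are not shown here, and the income values and weights below are hypothetical.

```python
import numpy as np

def weighted_median(values, weights):
    """Median of `values` where each observation carries a survey weight."""
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

def bootstrap_ci(values, weights, reps=2000, alpha=0.05, seed=0):
    """Percentile bootstrap 95% CI for the weighted median (illustration only)."""
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = []
    for _ in range(reps):
        idx = rng.integers(0, n, n)          # resample households with replacement
        stats.append(weighted_median(values[idx], weights[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical household incomes (dollars) and survey weights.
rng = np.random.default_rng(1)
income = rng.lognormal(mean=10.2, sigma=0.6, size=500)
weight = rng.uniform(0.5, 1.5, size=500)

print("median:", round(weighted_median(income, weight)))
print("95% CI:", np.round(bootstrap_ci(income, weight), 0))
```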
Our measure of retirement savings does not include the value of defined benefit (DB) plans, "traditional" pension plans that provide benefits based on a formula and typically pay lifetime benefits as an annuity, unless a household has taken the benefit as a lump sum and converted it into an IRA or other account balance. Retirement savings also does not include savings held outside of a retirement account, which is included in financial assets as non-retirement savings. Similarly, we used other Federal Reserve variables to describe additional resources asked about in the SCF, such as home ownership, financial assets (including savings in and outside of a retirement account), debt, and net worth. This measure of net worth does not include the total value of anticipated DB plan or Social Security benefits, in part because it is difficult to determine the present value of these benefits.

An important exception to our use of Federal Reserve variables is our estimation of household income: in order to separately estimate key components of retirement income, such as Social Security and DB plans, we developed our own variable for income while attempting to mirror the Federal Reserve's income variable as closely as possible. We consulted with Federal Reserve staff to inform our calculations of Social Security and DB plan income. One limitation to these income calculations is that Social Security and DB plan income are for the respondent and his or her spouse/partner for 2013, whereas other income is reported for the entire family for 2012. However, we believe the estimates are reliable for our purposes. For example, 88 percent of households age 65 and older consist only of the respondent and his or her spouse/partner. Further, we conducted electronic testing and found no statistically significant difference between estimates of income using our variables and the Federal Reserve's variables, either in aggregate or by various age groups. When describing the average share of household income from a particular source, we divided for each household the amount from that source by the household's total income, and reported the average across all households. Income from DB plans includes traditional pensions with lifetime benefits and annuitized DC plans. In 2011, we found that few retirees with DC plans chose or purchased an annuity (GAO-11-400).

The Thrift Savings Plan offers an annuity with monthly payments that increase each year up to 3 percent, based on inflation. Annuities purchased through other channels may provide different levels of lifetime income. If a household purchased an annuity without inflation protection, the initial amount of income would be higher. Similarly, different assumptions about the interest rate would change the annuity amount. For example, the Thrift Savings Plan's annuity calculator currently uses an interest rate of two percent as of the calculation date, though a higher interest rate would increase the annuity amount.

Defining retirement for Americans is not without difficulty, as retirement is a nebulous concept and different people may define retirement for themselves differently. Self-defined retirees may work or not claim Social Security benefits, while people who do not identify as retired may claim Social Security benefits or not work. For the purpose of this report, we discuss households and workers nearing retirement age as 55-64 to isolate near retirees and determine retirement readiness, though some of this group may in fact be retired.
We discuss the age group 65-74 to examine retirees in the first stage of retirement, although some members of this group may not be retired. Finally, we discuss the age group 75 and older, most of whom we expect to be retired. To analyze other evidence of retirement security, we reviewed several studies of retirement adequacy and compared and contrasted their methodologies and findings. These included academic studies based on formal models of optimal saving behavior and consumption patterns, those that projected savings levels in retirement based on recent savings data, and other reports examining the levels, adequacy, and sources of retirement wealth. We selected savings projections models that we had familiarity with from past GAO reports, and chose other studies and reports based on recommendations from internal and outside stakeholders. We also interviewed authors of studies and other retirement experts about retirement readiness. We also reviewed survey questions of retirees and workers approaching retirement age to infer information about their experiences of saving for and living in retirement. These surveys asked questions regarding financial well-being, confidence in being able to afford a comfortable retirement, and expectations of when and how people plan to retire contrasted with the actual experiences of current retirees. We analyzed the most recent available data from all of the surveys used as of April 2015. The University of Michigan’s Health and Retirement Study (HRS) is a longitudinal panel study that surveys a representative sample of approximately 26,000 Americans over the age of 50 every 2 years, with new cohorts being added to the sample every 6 years. The HRS also includes off-year studies to cover specific topics, like consumption, in depth. GAO used data from the 2012 core survey. As with all survey data, some statistical imprecision exists in the data that are presented in this report. The Federal Reserve’s 2013 Survey of Household Economics and Decisionmaking is a first-time survey conducted by the Federal Reserve to better understand the financial state of U.S. households. The survey was conducted by the Board’s Division of Consumer and Community Affairs in September 2013 using a nationally representative online survey panel. The survey was administered by GfK, an online consumer research company. It created a nationally representative probability- based sample by selecting respondents, adults 18 years and older, based on both random digit dialing and address-based sampling. A total of 4,134 surveys were fully completed. The data are weighted using the variables of gender, age, race/ethnicity, education, census region, residence in a metropolitan area, and access to the Internet. Demographic weighting targets are based on the Current Population Survey. As with all survey data, some statistical imprecision exists in the data that are presented in this report. Gallup conducts daily tracking of public opinion through the Gallup U.S. Daily. For the Gallup U.S. Daily, Gallup samples 3,500 respondents a week, 15,000 a month, and 175,000 a year. Surveys are conducted among U.S. adults ages 18 and older, using both landline and cell phone numbers. Each sample of national adults includes a minimum quota of 50 percent cell phone respondents and 50 percent landline respondents. The data are weighted by gender, age, race, Hispanic ethnicity, education, region, population density, and phone status. Demographic weighting targets are based on the Current Population Survey. 
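The surveys described above weight respondents so that demographics match Current Population Survey targets; the vendors' exact procedures are not specified in this report. The sketch below shows one common approach, raking (iterative proportional fitting) to marginal targets, using made-up respondent categories and assumed target shares.

```python
import numpy as np

def rake(weights, categories, targets, iterations=25):
    """Iterative proportional fitting: adjust weights until each demographic
    margin matches its target population share."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(iterations):
        for var, target_shares in targets.items():
            groups = np.asarray(categories[var])
            total = w.sum()
            factors = np.ones_like(w)
            for level, share in target_shares.items():
                mask = groups == level
                group_sum = w[mask].sum()
                if group_sum > 0:
                    factors[mask] = share * total / group_sum
            w *= factors
    return w

# Hypothetical respondents and assumed population shares (e.g., from CPS tables).
categories = {
    "gender": np.array(["F", "F", "M", "M", "M"]),
    "region": np.array(["NE", "S", "S", "W", "NE"]),
}
targets = {
    "gender": {"F": 0.51, "M": 0.49},
    "region": {"NE": 0.18, "S": 0.38, "W": 0.44},
}
weights = rake(np.ones(5), categories, targets)
print(np.round(weights / weights.sum(), 3))
```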
Gallup samples landline and cell phone numbers using random-digit-dial methods. The results we reported on are based on the sub-sample of baby boomers, or 1,929 adults born from 1946 through 1964. The margin of sampling error is plus or minus 4 percentage points at the 95 percent confidence level. The 2015 Retirement Confidence Survey, conducted by the Employee Benefit Research Institute (EBRI) and Greenwald & Associates, is an annual survey on the views and attitudes of working-age and retired Americans regarding retirement, their preparations for retirement, their confidence with regard to various aspects of retirement, and related issues. The survey was conducted in January and February 2015 through 20-minute telephone interviews with 2,004 individuals (1,003 workers and 1,001 retirees) age 25 and older in the United States. Random-digit dialing was used to obtain a representative sample, as well as a cell phone supplement. All data are weighted by age, sex, and education to reflect the actual proportions in the adult population. The weighted samples of workers and retirees yield a statistical precision of plus or minus 3.5 percentage points, with 95 percent certainty, of what the results would be if all Americans age 25 and older were surveyed with complete accuracy. The 2013 Risks and Process of Retirement Survey, sponsored by the Society of Actuaries and prepared by Greenwald & Associates, is a survey intended to provide insights into how Americans decide to retire, how they perceive post-retirement risks, and how they manage financial resources in retirement. The survey was conducted online among Americans age 45-80 and included both pre-retirees and retirees at all income levels. A total of 2,000 interviews, half among pre-retirees and half among retirees, lasting an average of 20 minutes, were conducted using Research Now’s online consumer panel from August 19-28, 2013. The sample data are weighted by age, sex, and census region to the 2012 population estimates released by the Census Bureau. As with all survey data, some statistical imprecision exists in the data that are presented in this report. The official poverty rates and Supplemental Poverty Measures that we report come from the Census Bureau. The official poverty rate is sometimes used to determine eligibility for government programs and funding distributions. The Supplemental Poverty Measure is considered an experimental measure and serves as an additional indicator of economic well-being and provides a deeper understanding of economic conditions and policy effects. We reported on the poverty rates for older Americans, to indicate financial well-being. For all survey data used in this report, we reviewed methodological documentation and, when appropriate, interviewed individuals knowledgeable about the data and conducted electronic testing. Based on this, we found the data to be reliable for the purposes used in this report. Aon Hewitt, “The Real Deal: 2012 Retirement Income Adequacy at Large Companies – Highlights,” (2012), accessed April 8, 2015, http://www.aon.com/human-capital-consulting/thought- leadership/retirement/survey_2012_the-real-deal.jsp. Biggs, Andrew G. and Sylvester Schieber, “Is There a Retirement Crisis?” National Affairs, no. 20 (Summer 2014): 55-75. Brady, Peter, Kimberly Burham, and Sarah Holden. The Success of the U.S. Retirement System. Investment Company Institute. Washington, D.C.: December 2012. Favreault, Melissa M., Richard W. Johnson, Karen E. Smith, and Sheila R. 
Zedlewski, “Boomers’ Retirement Income Prospects.” Urban Institute. Brief no. 34 (February 2012). Hurd, Michael D. and Susan Rohwedder, “Economic Preparation for Retirement.” Investigations in the Economics of Aging, ed. David A. Wise. Chicago: University of Chicago Press (May 2012), 77-113. Munnell, Alicia H., Wengliang Hou, and Anthony Webb, “NRRI Update Shows Half Still Falling Short.” Center for Retirement Research at Boston College, Number 14-20. (December 2014). Rhee, Nari and Ilana Boivie, “The Continuing Retirement Savings Crisis.” National Institute on Retirement Security. Washington, D.C.: March 2015. Scholz, John Karl, Ananth Seshadri, and Surachai Khitatrakun, “Are Americans Saving ‘Optimally’ for Retirement?” Journal of Political Economy. vol. 114, no. 4. (2006): 607-643. VanDerhei, Jack, “Retirement Income Adequacy for Boomers and Gen Xers: Evidence from the 2012 EBRI Retirement Security Projection Model.” Employee Benefit Research Institute, Notes, vol. 33, no. 5 (May 2012). Michael Collins (Assistant Director), Mark Glickman, Shilpa Grover, and Laura Hoffrey made key contributions to this report. In addition, support was provided by James Bennett, Mitchell Karpman, Kathy Leslie, Sheila McCoy, Susan E. Offutt (GAO Chief Economist), Mark Ramage, Joseph Silvestri, Frank Todisco (GAO Chief Actuary), Walter Vance, Charles Willson, and Craig Winslow. Private Pensions: Participants Need Better Information When Offered Lump Sums That Replace Their Lifetime Benefits, GAO-15-74. Washington, D.C.: January 27, 2015. Retirement Security: Challenges for Those Claiming Social Security Benefits Early and New Health Coverage Options, GAO-14-311. Washington, D.C.: April 23, 2014. Retirement Security: Trends in Marriage and Work Patterns May Increase Economic Vulnerability for Some Retirees, GAO-14-33. Washington, D.C.: January 15, 2014. 401(k) Plans: Other Countries’ Experiences Offer Lessons in Policies and Oversight of Spend-down Options, GAO-14-9. Washington, D.C.: November 20, 2013. Automatic IRAs: Lower-Earning Households Could Realize Increases in Retirement Income, GAO-13-699. Washington, D.C.: August 23, 2013. 401(k) Plans: Labor and IRS Could Improve the Rollover Process for Participants, GAO-13-30. Washington, D.C.: March 7, 2013. Retirement Security: Annuities with Guaranteed Lifetime Withdrawals Have Both Benefits and Risks, but Regulation Varies across States, GAO-13-75. Washington, D.C.: December 10, 2012. Retirement Security: Women Still Face Challenges, GAO-12-699. Washington, D.C.: July 19, 2012. Unemployed Older Workers: Many Experience Challenges Regaining Employment and Face Reduced Retirement Security, GAO-12-445. Washington, D.C.: April 25, 2012. Retirement Income: Ensuring Income throughout Retirement Requires Difficult Choices, GAO-11-400. Washington, D.C.: June 7, 2011. Private Pensions: Some Key Features Lead to an Uneven Distribution of Benefits, GAO-11-333. Washington, D.C.: March 30, 2011.
As baby boomers move into retirement each year, the Census Bureau projects that the age 65-and-older population will grow over 50 percent between 2015 and 2030. Several issues call attention to the retirement security of this sizeable population, including a shift in private-sector pension coverage from defined benefit plans to defined contribution plans, longer life expectancies, and uncertainty about Social Security's long-term financial condition. In light of these developments, GAO was asked to review the financial status of workers approaching retirement and of current retirees. GAO examined 1) the financial resources of workers approaching retirement and retirees and 2) the evidence that studies and surveys provide about retirement security for workers and retirees. To conduct this work, GAO analyzed household financial data, including retirement savings and income, from the Federal Reserve's 2013 Survey of Consumer Finances, reviewed academic studies of retirement savings adequacy, analyzed retirement-related questions from surveys, and interviewed retirement experts about retirement readiness. GAO found the data to be reliable for the purposes used in this report. GAO received technical comments on a draft of this report from the Department of Labor and incorporated them as appropriate. Many retirees and workers approaching retirement have limited financial resources. About half of households age 55 and older have no retirement savings (such as in a 401(k) plan or an IRA). According to GAO's analysis of the 2013 Survey of Consumer Finances, many older households without retirement savings have few other resources, such as a defined benefit (DB) plan or nonretirement savings, to draw on in retirement (see figure below). For example, among households age 55 and older, about 29 percent have neither retirement savings nor a DB plan, which typically provides a monthly payment for life. Households that have retirement savings generally have other resources to draw on, such as non-retirement savings and DB plans. Among those with some retirement savings, the median amount of those savings is about $104,000 for households age 55-64 and $148,000 for households age 65-74, equivalent to an inflation-protected annuity of $310 and $649 per month, respectively. Social Security provides most of the income for about half of households age 65 and older. Studies and surveys GAO reviewed provide mixed evidence about the adequacy of retirement savings. Studies range widely in their conclusions about the degree to which Americans are likely to maintain their pre-retirement standard of living in retirement, largely because of different assumptions about how much income this goal requires. The studies generally found about one-third to two-thirds of workers are at risk of falling short of this target. In surveys, compared to current retirees, workers age 55 and older expect to retire later and a higher percentage plan to work during retirement. However, one survey found that about half of retirees said they retired earlier than planned due to health problems, changes at their workplace, or other factors, suggesting that many workers may be overestimating their future retirement income and savings. Surveys have also found that people age 55-64 are less confident about their finances in retirement than those who are age 65 or older.
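To show roughly how a savings balance maps to a monthly annuity amount like the $310 and $649 figures cited above, the sketch below prices a simple escalating annuity under assumed interest, escalation, and payout-horizon parameters. It is a simplified illustration only; the report's figures come from insured, inflation-protected annuity quotes, so the results here will not match them exactly.

```python
def monthly_annuity(balance, years, annual_rate=0.02, annual_increase=0.02):
    """First-year monthly payment from a balance, with payments rising once a year.

    Simplified fixed-horizon model: the balance funds `years` of payments that
    grow by `annual_increase` each year and are discounted at `annual_rate`.
    """
    i = annual_rate / 12
    # Present value today of 12 level monthly payments of 1 made during the first year.
    pv_year_of_ones = sum((1 + i) ** -(m + 1) for m in range(12))
    factor = 0.0
    for k in range(years):
        growth = (1 + annual_increase) ** k      # payment level in year k
        discount = (1 + annual_rate) ** -k       # discount year k back to today
        factor += growth * discount * pv_year_of_ones
    return balance / factor

# Median balances cited in the report for households age 55-64 and 65-74;
# the payout horizons are assumptions, not figures from the report.
for balance, years in [(104_000, 25), (148_000, 20)]:
    print(f"${balance:,} over {years} years: about ${monthly_annuity(balance, years):,.0f}/month")
```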
Strategic human capital management, and specifically the need to develop results-oriented organizational cultures, is receiving increased attention across the federal government. The Congress has underscored the consequences of human capital weaknesses through a wide range of oversight hearings held over the last few years. In addition, to foster a results-oriented culture in federal agencies, the Congress is considering legislative proposals to, among other things, focus attention on the impact poor performance can have on the effectiveness of an organization and require agencies to have a chief human capital officer to select, develop, and manage a productive, high-quality workforce. The President’s Management Agenda, released in August 2001, identified human capital as one of the five key governmentwide management challenges currently facing the federal government. Subsequently, the Office of Management and Budget and OPM developed criteria that recognized the importance of creating a performance culture that appraises and rewards employees based on their contributions to organizational goals as a key dimension of effective human capital management. We developed a model of strategic human capital management to highlight the kinds of thinking that agencies should apply, as well as some of the steps they can take, to make progress in managing human capital strategically. The model consists of eight critical success factors, which are organized to correspond with four cornerstones of effective strategic human capital management: (1) leadership, (2) strategic human capital planning, (3) acquiring, developing, and retaining talent, and (4) results- oriented organizational cultures. Within the cornerstone of results-oriented organizational cultures, a critical success factor is linking unit and individual performance to organizational goals. One way to reinforce accountability and alignment of individual performance expectations with organizational goals is through the use of results-oriented performance agreements. We have reported that other countries have begun to use their performance management systems as a strategic tool to help achieve results. In particular, they use performance agreements to align and cascade organizational goals to individual performance expectations through several levels in their organizations. They also use performance agreements to help identify the crosscutting connections both within and between agencies and align the performance commitments of top-level executives with broader governmentwide priorities. Further, our work has shown that U.S. agencies have benefited from their use of results-oriented performance agreements for political and senior career executives. Although each agency developed and implemented performance agreements that reflected its specific organizational priorities, structures, and cultures, the performance agreements met the following characteristics. They strengthened alignment of results-oriented goals with daily operations, fostered collaboration across organizational boundaries, enhanced opportunities to discuss and routinely use performance information to make program improvements, provided a results-oriented basis for individual accountability, and maintained continuity of program goals during leadership transitions. Prior to OPM amending its regulations on senior executive performance management systems, BLM, FHWA, IRS, and VBA implemented systems that used a set of balanced expectations to manage senior executive performance. 
BLM implemented a balanced approach to manage its senior executive performance to focus attention and accountability on organizational priorities, make resource allocations, and minimize employee frustration. BLM incorporated performance elements in senior executives’ individual performance plans for the rating year ending June 2000 that were structured around its strategic goals to (1) “Restore and Maintain the Health of the Land,” (2) “Serve Current and Future Publics,” and (3) “Improve Organizational Effectiveness.” BLM also included a performance element in the senior executives’ plans to “Improve Human Resources Management and Quality of Work Life.” (For more information on BLM’s senior executive performance plans, see app. II.) FHWA implemented a balanced approach to managing its senior executive performance in response to its 1999 employee satisfaction survey. Specifically, the majority of employees that responded indicated that they did not understand their workgroup’s role in implementing FHWA’s corporate management strategies that were based on the Malcolm Baldridge National Quality Award and the Presidential Quality Award Criteria—leadership, strategic planning, customer and partner focus, information and analysis, human resource development and management, process management, and business results. Beginning in fiscal year 2000, FHWA appraised senior executives on these corporate management strategies. (For more information on FHWA’s senior executive performance plans, see app. III.) In response to the Internal Revenue Service Restructuring and Reform Act of 1998, IRS initiated a method of measuring performance designed to foster quality service, promote compliance with the tax laws, and consider the impact on employees. In fiscal year 2000, IRS implemented a senior executive performance management system that aligned the executives’ performance expectations with a set of balanced expectations consisting of employee satisfaction, customer satisfaction, and business results, and with two additional areas of responsibility—leadership and equal employment opportunity. (For more information on IRS’s senior executive performance plans, see app. IV.) VBA adopted a balanced scorecard approach in fiscal year 1999 as a strategic management tool to drive organizational change, provide feedback to employees on measures they can influence, link performance appraisal and reward systems to performance measures, and provide incentives to managers to work as teams in meeting performance measures. Its scorecard included measures for accuracy, speed and timeliness, unit cost, customer satisfaction, and employee development and satisfaction. VBA incorporated these measures in the performance appraisals for senior executives in its regional offices where the majority of senior executives are located. (For more information on VBA’s senior executive performance plans, see app. V.) Effective performance management systems translate organizational priorities and goals into direct and specific commitments that senior executives will be expected to achieve during the year. To this end, BLM, FHWA, IRS, and VBA developed a set of expectations for senior executive performance that were intended to balance organizational results, customer satisfaction, and employee perspectives and offered a menu of expectations for senior executives to incorporate into their individual performance plans. 
They appraised senior executives’ contributions to organizational results by the core competencies and supporting behaviors senior executives followed or the targets they met. In addition, the agencies appraised senior executives’ performance against their expectations for customer satisfaction and employee perspectives. OPM’s regulations emphasize holding senior executives accountable for their individual and organizational performance by linking individual performance management with results-oriented organizational goals. To appraise senior executive contributions to organizational results, BLM, FHWA, IRS, and VBA identified core competencies and supporting behaviors for senior executives to follow, while VBA also identified targets for senior executives to meet that are directly linked to organizational results, as shown in table 1. Core competencies and supporting behaviors: The agencies identified core competencies and supporting behaviors for senior executives to follow that are intended to contribute to their agencies’ achievement of performance goals. For example, FHWA set a performance expectation for senior executives to develop strategies to achieve FHWA’s strategic objectives and performance goals. To help meet this expectation, the Director of Field Services-South convened the “Southern Executive Safety Summit” in 2000 to address the region’s highway fatality rates—the highest in the nation— and their impact on FHWA achieving its goal on safety. The participants, including state and federal transportation and safety officials from the region, learned what each state was doing to decrease fatality rates and discussed how to create new safety strategies for each state and the region as a whole. Following the summit, Kentucky, North Carolina, and Mississippi held subsequent state safety summits and pursued numerous initiatives to reduce fatalities. The senior executive reported in his self- assessment for fiscal year 2001 that many states in the region have experienced a reduction in the number of highway fatalities since the Southern Executive Safety Summit, which is helping FHWA meet its goal of reducing the number of highway-related fatalities by 20 percent in 10 years. Similarly, to address IRS’s performance expectation for senior executives to develop and execute plans to achieve organizational goals, a senior executive who is the area director for compliance in New York has a performance expectation in his fiscal year 2002 individual performance plan to ensure that taxpayers affected by the events of September 11, 2001, are treated and audited according to their circumstances, and that the compliance guidelines and policy regarding affected taxpayers are adhered to. In particular, these taxpayers—including individuals and businesses— were not to be audited for prior tax years before the end of March 2002, if such an audit was necessary. To contribute to its strategic goal to restore and maintain the health of the land, BLM set an expectation for senior executives to understand and plan for the condition and use of public lands. In particular, the senior executive who heads the Colorado state office had a performance expectation in her individual performance plan for the 2001 performance appraisal cycle to conduct land use assessments and complete plans as scheduled for the Gunnison Gorge National Conservation Area. 
In her self-assessment for the 2001 performance appraisal cycle, she stated that she began conducting land use assessments for Gunnison Gorge and approved "pre-plans," which outline the anticipated schedule, budget, and stakeholder involvement to complete a land use plan.

Targets directly linked to organizational results: VBA identified targets with specific levels of performance for senior executives to meet. These targets link to the priorities in VBA's balanced scorecard and the Department of Veterans Affairs' (VA) strategic goals. For example, to contribute to VA's strategic goal to "provide 'One VA' world class service to veterans and their families through the effective management of people, technology, processes and financial resources" and to address its priority of accuracy, VBA set a national target of 72 percent for fiscal year 2001 for the accuracy rate of original and reopened compensation and pension claims and appeals that were completed and determined to be technically accurate. To contribute to that national target, the senior executive in the Nashville regional office had a performance expectation for his office to meet a target accuracy rate of 59.2 percent. Similarly, to further contribute to VA's strategic goal of world-class service and to address its priority of speed and timeliness, VBA set a national target for property holding time—the average number of months from date of acquisition to date of sale of properties acquired due to defaults on VA guaranteed loans—of 10 months for fiscal year 2001. To contribute to the national target, the same senior executive had a performance expectation for his office to meet a target of 8.6 months.

OPM's regulations recognize that senior executives in public sector organizations face the challenging task of balancing the needs of multiple customers, who at times may have differing or even competing expectations. Customer involvement is important to first make senior executives aware of differing or competing expectations and to then build partnerships and coalitions to reach mutual understanding of the issues. To this end, BLM, FHWA, IRS, and VBA set expectations for senior executives to address customer satisfaction in their individual performance plans and appraised their performance on the basis of partnerships, customer feedback, and improved products and services. Examples of the agencies' expectations for customer satisfaction are shown in table 2.

Partnerships: Partnerships and coalitions can help senior executives work collaboratively with their customers to ensure that the organization takes into account their multiple interests and achieves results. BLM's senior executives have relied on resource advisory councils (RAC) consisting of local residents with diverse interests as a way to involve customers, identify issues, and reach a reasonable degree of consensus regarding BLM's land management programs. To meet BLM's expectation to establish cooperative and constructive relationships that facilitate input from a range of stakeholders, the senior executive who heads the Montana state office set an expectation to expand partnerships and maintain close working relationships with national interest groups in his individual plan for the 2001 performance appraisal cycle.
This senior executive solicited feedback from the Central Montana RAC to discuss among his customers how to balance the ongoing, yet potentially competing uses—including recreation, grazing, and oil and gas leases—of a 150-mile stretch of the Missouri River and surrounding areas. According to the senior executive, the RAC recommended that ongoing uses continue, but that this stretch receive special protection from further development. In his self-assessment for the 2001 performance appraisal cycle, the senior executive stated that he continues to use the RAC as a highly effective citizen advisory group that plays a significant role in land management deliberations. Customer feedback: Customer feedback can help senior executives determine customers’ needs and their levels of satisfaction with existing products and services. To hold its senior executives accountable for customer satisfaction, senior executives in VBA’s regional offices had performance expectations to meet targets for veterans giving a high rating on satisfaction surveys. Specifically, the senior executive in the Nashville regional office had a target in fiscal year 2001 to attain 85 percent in overall satisfaction in a national survey of customers using vocational rehabilitation and employment services and support. In addition, to address his performance expectation for customer satisfaction, the senior executive who heads VBA’s Waco regional office convened frequent “town hall” meetings to listen to veterans’ needs and discuss VBA issues, such as legislative changes that affect the processing of veterans’ claims. According to this executive, the town hall meetings helped improve his customer satisfaction levels because veterans identified the concerns that were most important to them, gained direct access to the VBA employees working on their benefit claims, and were better able to understand the claims process. Specifically, the senior executive reported in his self-assessment that during fiscal year 2001 he worked with local service officers to identify in advance those veterans planning to attend the town hall meetings, had their claims folders available for review at the meetings, and was thus able to enhance outreach programs. Improved products and services: Senior executives can use the feedback from customers to enhance the customers’ understanding of the organization and make improvements in the organization’s products and services. For example, to meet IRS’s performance expectation for senior executives to address customer satisfaction by continuously improving products and services, a senior executive responsible for submission processing and taxpayer assistance had a performance expectation in her fiscal year 2001 individual performance plan to develop a communication plan. This plan was intended to better serve customers by helping improve their knowledge and understanding of the tax return process. To hold its senior executives accountable for improved products and services, VBA set targets for executives to achieve, such as the abandoned telephone call rate—the percentage of callers who get through to VBA, but are put on hold and hang up before being connected to an employee. Specifically, for fiscal year 2001, the senior executive in the Nashville regional office had a target for his office for an abandoned telephone call rate of not more than 5 percent for customers’ inquiries of VBA’s benefit programs, such as compensation and pension services. 
OPM’s regulations recognize that an agency’s people are vital assets and people achieve organizational goals and results. Accordingly, the regulations call for senior executive performance plans and appraisals to contain performance expectations on employees’ perspectives. To this end, BLM, FHWA, IRS, and VBA set expectations for senior executives to address employee perspectives in their individual performance plans and appraised their performance on the basis of the training provided to staff, safe and healthy work environment, teamwork, employee satisfaction, and fairness and diversity. Examples of the agencies’ expectations for employee perspectives are shown in table 3. Training: Senior executives can provide employees with the necessary training and continuous developmental opportunities to perform their jobs more effectively. To address VBA’s performance expectation for senior executives to ensure that plans exist and are adequately implemented to recruit, train, retain, motivate, empower, and advance employees, the senior executive in VBA’s Manila, Philippines, Regional Office and Outpatient Clinic conducted focus groups to identify actions needed to respond to the results of the 1999 employee survey. One action was to task a training committee to develop and implement a Training Needs Assessment tool to determine employees’ training needs and to schedule training for fiscal year 2002. The senior executive stated in his self- assessment for fiscal year 2001 that the employees and their supervisors used the assessment tool to establish individual development plans and the training committee has been scheduling training sessions to ensure that individual development plans are met. To meet BLM’s expectation for senior executives to help attract and retain well-qualified employees, the senior executive who heads BLM’s Nevada state office set a performance expectation for the 2001 performance appraisal cycle to maintain a trained and motivated workforce. This executive worked with his Human Resources Development Committee, composed of representatives from the eight BLM field offices in Nevada. The committee meets regularly to identify employee issues, make recommendations, and implement actions. Specifically, with input from the committee, the senior executive developed a Statewide Mentoring Program to enhance and promote opportunities for employees’ skill development and to assist them in achieving their career goals. The senior executive did not discuss the mentoring program in his self-assessment for the 2001 performance appraisal cycle, but generally stated that his office provided training to enhance leadership and interpersonal skills. Safe and healthy work environment: Senior executives can provide employees with safe, secure, and healthful work conditions to ensure that the workspace is conducive to effective performance. To address VBA’s expectation for senior executives to provide a safe, healthy work environment in fiscal year 2001, the senior executive who heads VBA’s Manila, Philippines, Regional Office and Outpatient Clinic worked with employees to improve the security and safety of the regional office. Specifically, to prepare the office in case suspicious materials are received, the senior executive reviewed and updated its emergency evacuation plan and then met with employees to ensure they understood the plan’s procedures and were comfortable with their responsibilities. 
In addition, he worked with the Regional Security Office to provide security awareness training to employees and held several emergency drills to test employees’ responses. He stated in his self-assessment for fiscal year 2001 that while employees were still concerned with security, he believed confidence in their safety and welfare had improved. Teamwork: Senior executives can encourage a team-based approach to help improve employee morale and job satisfaction by creating an environment that is open to communication and has a sense of shared responsibility for accomplishing organizational goals. To create an environment in which knowledge is managed, shared, and used effectively, FHWA encourages its senior executives to use organizational self-assessments to solicit employee perspectives and gauge their employees’ work environment. FHWA provides sample questions for these self-assessments that are based on the Malcolm Baldrige criteria. For example, the senior executive heading the Office of Information and Management Services required each of her three divisions to complete an organizational self-assessment in 2001. FHWA employees trained in the Baldrige criteria facilitated the half-day sessions for each division. As a result of the sessions, the office consolidated the three divisions’ self-assessments and summarized the office’s “strengths” and “opportunities for improvement” in a report. The report identified one of the office’s strengths to be management’s support and approval for training, and one of its opportunities for improvement to be keeping employees’ individual development plans up to date. In response, the senior executive identified in her individual performance plan a specific expectation of updating individual development plans for every employee by April 30, 2002. To meet IRS’s performance expectation for senior executives to motivate employees to achieve high performance through open and honest communication and involve them in decision making, a senior executive who is the area director for compliance in New York included an expectation in his fiscal year 2001 individual performance plan to look for partnering opportunities to maximize problem resolution and employee involvement, while developing and maintaining effective relationships with the seven National Treasury Employees Union chapters in his area. Employee satisfaction: Senior executives can monitor employees’ satisfaction with their work environment to gauge whether they feel empowered and motivated to contribute to organizational goals. For senior executives in the regional offices, VBA set a target for employee satisfaction that senior executives were to achieve for fiscal year 2001. Based on a 1-to-5 scale, the target was set by estimating the average response on two questions from the employee satisfaction survey. The two questions ask about the employee’s satisfaction with his or her job and the employee’s overall satisfaction with the organization. For example, VBA set a national target score of 3.6 for employee satisfaction in the compensation and pension services business line in fiscal year 2001. All regional offices contribute to the target for this business line. Specifically, the senior executive in the Nashville regional office had a performance expectation for his office to meet a target score of 3.5 for employee satisfaction.
Fairness and diversity: Senior executives can foster fairness and diversity by protecting the rights of all employees, providing a fair dispute resolution system, and working to prevent discrimination through equality of employment and opportunity. To meet BLM’s performance expectation for senior executives to establish a zero tolerance standard for discrimination, harassment, and hostile work environments, a senior executive who heads BLM’s Nevada state office set an expectation in his individual plan for the 2001 performance appraisal cycle that he would demonstrate commitment to nondiscrimination in the workplace by ensuring fair access to developmental opportunities for employees. While the four agencies tailored their performance management systems to fit their organizational and operational needs, we identified an initial set of implementation approaches that BLM, FHWA, IRS, and VBA are taking that may be helpful to other agencies as they manage senior executive performance against balanced expectations. Specifically, BLM, FHWA, IRS, and VBA provide objective performance data, require follow-up actions, and make meaningful distinctions in performance. Providing objective data for organizational results, customer satisfaction, and employee perspectives can help senior executives manage during the year, identify performance gaps, pinpoint improvement opportunities, and compare their performance with that of other executives. Specifically, the agencies developed data systems so that senior executives can track their individual performance against organizational results, and disaggregated customer and employee satisfaction survey data. Developed data systems: To help senior executives see how they are contributing to organizational results during the year, BLM and VBA developed data systems for executives to use to track their individual performance against organizational results. For example, BLM’s Director’s Tracking System collects and makes available on a real-time basis data on each senior executive’s progress in his or her state office towards BLM’s national priorities and the resources expended on each priority. In particular, a BLM senior executive in headquarters responsible for the wild horse and burro adoptions program can use the tracking system to identify where the senior executives in the state offices stand against their targets and what the program costs have been by state. Specifically, as of mid-June 2002, the BLM state director in California had completed 532 adoptions at a total cost of $460,000 towards his target of 1,150 adoptions for fiscal year 2002. Similarly, the state director in Montana had completed 46 adoptions at a total cost of $63,000 towards his target of 300 adoptions. VBA also developed a data system that tracks organizational and individual performance. Its balanced scorecard data are updated monthly, and senior executives and other employees can access the data through the agency’s Intranet. The balanced scorecard compares actual performance against the targets set for the national and regional office levels. According to VBA officials, the scorecard helps employees understand how they can affect the results of the organization. Senior executives refer to the balanced scorecard data at their leadership meetings, discuss how they performed relative to the scorecard, and identify the causes behind outstanding and poor performance.
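A tracking system of this kind is, at bottom, a running comparison of each executive’s year-to-date counts and costs against annual targets. The sketch below is a minimal illustration of that comparison, not a depiction of BLM’s Director’s Tracking System or VBA’s balanced scorecard; the record layout is an assumption, and the figures are drawn loosely from the adoption numbers cited above.

```python
from dataclasses import dataclass

@dataclass
class ProgramStatus:
    """One executive's year-to-date progress against an annual target (hypothetical layout)."""
    office: str
    completed: int
    target: int
    cost_to_date: float

    @property
    def percent_of_target(self) -> float:
        return 100.0 * self.completed / self.target

    @property
    def cost_per_unit(self) -> float:
        return self.cost_to_date / self.completed if self.completed else 0.0

# Figures based loosely on the mid-June 2002 adoption data cited above.
statuses = [
    ProgramStatus("California", completed=532, target=1_150, cost_to_date=460_000),
    ProgramStatus("Montana", completed=46, target=300, cost_to_date=63_000),
]

for s in statuses:
    print(f"{s.office}: {s.percent_of_target:.0f}% of target, "
          f"${s.cost_per_unit:,.0f} per adoption")
```

Expressing progress as a percentage of target and as a unit cost is what lets an executive in headquarters compare offices on a common footing.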
Disaggregated survey data: Specific customer and employee feedback helps senior executives pinpoint actions to improve products and services for customers and to enhance employee satisfaction. BLM, FHWA, IRS, and VBA disaggregated the data from agencywide customer and employee satisfaction surveys so that the results were applicable to a senior executive’s customers and employees. For example, from its Use Authorization Survey administered to its various customers in fiscal year 2000, BLM disaggregated the survey data to provide the applicable results to individual senior executives who head the state offices. Specifically, the senior executive in the Montana state office received data for his state showing that 81 percent of the grazing permit customers surveyed gave a favorable rating for the timeliness of permit processing and for service quality. In his self-assessment for the 2001 performance appraisal cycle, he stated that issuing grazing permits had progressed without any problems or backlogs and that permittees have not experienced any delays. VBA disaggregates its survey results to the regional offices and policy and program support offices with more than 15 employees in order to allow the senior executives to determine actions that are appropriate for their offices. In 2001, VBA administered its most recent employee survey to measure aspects of organizational climate related to high performance. For each question on the survey, VBA provided the office results and the VBA average, as well as baseline data from surveys conducted in 1997 and 1999. For example, 47 percent of the employees surveyed in the St. Paul regional office either strongly agreed or agreed that managers provided an environment that supports employee involvement, contributions, and teamwork. According to the 2001 survey results, this percentage was slightly higher than the VBA average of 43 percent and indicated an improvement from the 33 percent the office scored on this question in both the 1997 and 1999 employee surveys. VBA compiles a national report of the results so that senior executives can compare how their office scored against other offices and VBA as a whole. IRS disaggregates data to the workgroup level from its IRS/National Treasury Employees Union Employee Satisfaction Survey, which measures general satisfaction with IRS, the workplace, and the union. The Gallup Organization administers this survey to all employees; the survey comprises Gallup’s 12 questions (“Q12”), additional questions unique to IRS, such as views on local union chapters and employee organizations, and questions on issues IRS has been tracking over time. Gallup provides the results for each workgroup. For example, a senior executive can compare how his workgroup performed relative to other operating divisions and to IRS as a whole. Specifically, one senior executive’s workgroup scored 3.68 out of a possible 5 for “having the materials and equipment they need to do their work right” compared to the IRS-wide score of 3.58 on the survey. To allow senior executives and managers to benchmark externally, Gallup compares each workgroup’s results to the 50th (median) and 75th (best practices) percentile scores from Gallup’s Q12 database. To benchmark internally, IRS provides the servicewide results from the previous year’s survey in each workgroup report.
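Disaggregating survey results comes down to grouping responses by office or workgroup and placing each group’s score alongside the agencywide average and an external benchmark. The sketch below illustrates that comparison under assumed data; the workgroup names, responses, and benchmark values are hypothetical and are not taken from the Gallup, VBA, or BLM instruments.

```python
from statistics import mean

# Hypothetical 1-to-5 responses to one survey question, keyed by workgroup.
responses = {
    "Workgroup A": [4, 3, 5, 4, 4, 3],
    "Workgroup B": [3, 3, 4, 2, 3, 4],
}

# Hypothetical benchmarks: an agencywide average and external percentile scores.
agency_average = 3.58
external_benchmarks = {"50th percentile": 3.7, "75th percentile": 4.1}

for group, scores in responses.items():
    score = mean(scores)
    gap_to_agency = score - agency_average
    print(f"{group}: {score:.2f} ({gap_to_agency:+.2f} vs. agency average)")
    for label, benchmark in external_benchmarks.items():
        status = "at or above" if score >= benchmark else "below"
        print(f"  {status} the {label} benchmark of {benchmark}")
```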
As part of their senior executive performance management systems, IRS and FHWA require their senior executives to follow up on customer and employee issues. To improve customer satisfaction, the Commissioner of Internal Revenue set an expectation that the business units, headed by senior executives, develop action plans based on customer survey data that are relevant to the needs of their particular customers. IRS provided guidance to senior executives and managers to help them understand and interpret the customer survey data, identify areas for improvement, and develop action plans to respond to customers’ issues and concerns. For example, to address the customer satisfaction expectation in his fiscal year 2002 individual performance plan, an IRS senior executive who is the area director for compliance in Laguna Niguel, California, requires each of his territory managers to present an action plan identifying ways to improve low scores from customer surveys. He then rolls up these managers’ plans into a consolidated area action plan for which he is responsible. Specifically, an expectation in his action plan is to improve how customers are treated during collection and examination activities by ensuring that examiners explain to customers their taxpayer rights, as well as why they were selected for examination and what they could expect. Further, the senior executive plans to ensure that territory managers solicit feedback from customers on their treatment during these activities and identify specific reasons for any customer dissatisfaction. In his midyear self-assessment for fiscal year 2002, the senior executive stated that substantial progress was being made in achieving the collection and examination customer satisfaction goals. Similarly, to address employee perspectives, IRS requires senior executives to hold workgroup meetings with their employees to discuss the workgroups’ Employee Satisfaction Survey results and develop action plans to address these results. According to a senior executive in IRS’s criminal investigation unit, the workgroup meetings were beneficial because they increased communication with employees and identified improvements in the quality of worklife. For example, through the workgroup meetings, employees identified the need for recruiting supervisory special agents to even out some of the workload. Subsequently, the senior executive set an expectation in his fiscal year 2002 individual performance plan to ensure that the field office has a strong recruitment program to attract viable candidates. He also has an expectation to ensure that his field offices hold timely workgroup meetings and develop and implement action plans to address concerns identified during these meetings. To reinforce the importance of follow-up action, IRS developed a Web-based database system to track workgroup issues across IRS. According to an IRS official, the system is being upgraded to improve its usefulness for senior executives and will allow them to track their progress in completing the actions identified in the workgroup meetings. In addition, all employees will be able to access summary information to help identify trends in the data across workgroups. The system will also provide employees with the opportunity to share best practice information on resolved workgroup issues. To help meet their employee perspective performance expectations, FHWA requires senior executives to use 360-degree feedback instruments to solicit employee views on their leadership skills. Based on the 360-degree feedback, senior executives are to identify action items and incorporate them into their individual performance plans for the next fiscal year.
FHWA piloted the 360-degree feedback instrument for half its leadership team of senior executives in fiscal year 2001 and scheduled the rest for fiscal year 2002. The 360-degree feedback process is designed to provide an executive direct input from various sources—peers, customers, and subordinates— and to compare those results to a self-evaluation and input from a supervisor. While the 360-degree feedback instrument is intended for developmental purposes to help senior executives identify areas for improvement and is not included in the executive’s performance evaluation, executives are held accountable for taking some action with the 360-degree feedback results and responding to the concern of their peers, customers, and subordinates. For example, based on 360-degree feedback, a senior executive for field services identified better communications with subordinates and increased collaboration among colleagues as areas for improvement, and as required, he then incorporated action items into his individual performance plan. In fiscal year 2001, he set a performance expectation to develop a leadership self-improvement action plan and identify appropriate improvement goals. In his self-assessment for fiscal year 2001, he reported that he improved his personal contact and attention to the division offices as evidenced by a 30 percent increase in visits to the divisions that year. Also, he stated that he encouraged his subordinates to assess their leadership skills. Consequently, 9 of his 11 subordinates are using 360-degree feedback instruments to improve their personal leadership competencies. According to OPM, the amended regulations were designed to recognize that effective performance management requires agency leadership to make meaningful distinctions between acceptable and outstanding performance of senior executives and to appropriately reward those who perform at the highest level. Effective performance management systems provide agencies with the objective and fact-based information they need to distinguish levels of performance among senior executives and serve as a basis for bonus recommendations. OPM data on senior executive performance ratings indicate that agencies across the federal government are not making meaningful distinctions among senior executives’ performance. Specifically, agencies rated about 85 percent and 82 percent of senior executives at the highest level their systems permit in their performance ratings in fiscal years 2000 and 2001, respectively. Nearly all of the senior executives are rated using three- and five-level rating systems with the majority of senior executives rated under five-level systems. When disaggregating the data by rating system, the percentage of senior executives that received the highest level rating under five-level systems was approximately 77 and 75 percent in fiscal years 2000 and 2001, respectively. In the same period, the percent of senior executives receiving the highest level rating under three-level systems was about 99 percent. In addition, OPM data show that, governmentwide, approximately 52 percent of senior executives received bonuses each year since fiscal year 1999. Between fiscal years 1999 and 2001, the average bonus payment increased from about $10,200 to $12,300. OPM officials told us that they plan to closely monitor the distribution of fiscal year 2002 performance ratings and bonuses. 
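Monitoring of the kind OPM describes reduces to tallying how ratings and bonuses are distributed across the executive corps. The sketch below shows one way such a tally could be computed; it is illustrative only, and the rating labels and records are hypothetical rather than OPM data.

```python
from collections import Counter
from statistics import mean

# Hypothetical (rating, bonus) records for a small pool of career executives.
records = [
    ("outstanding", 12_000), ("outstanding", 10_500), ("exceeded", 8_000),
    ("exceeded", 0), ("met", 0), ("outstanding", 11_000), ("exceeded", 0),
]

ratings = Counter(rating for rating, _ in records)
highest = "outstanding"  # assumed top rating level in this sketch
share_at_top = 100.0 * ratings[highest] / len(records)
bonuses = [amount for _, amount in records if amount > 0]

print(f"Share rated at the highest level: {share_at_top:.0f}%")
print(f"Share receiving a bonus: {100.0 * len(bonuses) / len(records):.0f}%")
print(f"Average bonus among recipients: ${mean(bonuses):,.0f}")
```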
IRS, FHWA, VBA, and BLM recognize that they are still working at implementing effective performance management systems that make meaningful distinctions in senior executive performance. For example, IRS established an executive compensation plan for determining base salary, performance bonuses, and other awards for its senior executives that is intended to explicitly link individual performance to organizational performance and is designed to emphasize performance. To recognize performance across different levels of responsibilities and commitments, IRS assigns senior executives to one of three bonus levels at the beginning of the performance appraisal cycle. Assignments depend on the senior executives’ responsibilities and commitments in their individual performance plans for the year, as well as the scope of their work and its impact on IRS’s overall mission and goals. For example, the Commissioner of Internal Revenue or Deputy Commissioner assigns senior executives to bonus level three—considered to be the level with the highest responsibilities and commitments—only if they are a part of the Senior Leadership Team. IRS restricts the number of senior executives assigned to each bonus level for each business unit. In addition, for each bonus level, IRS establishes set bonus ranges by individual summary evaluation rating, which is intended to reinforce the link between performance and rewards. The bonus levels and corresponding bonus amounts of base salary by summary rating are shown in table 4. To help ensure realistic and consistent performance ratings, each IRS business unit has a “point budget” for assigning performance ratings that is the total of four points for each senior executive in the unit. After the initial summary evaluation ratings are assigned, the senior executives’ ratings are converted into points—an “outstanding” rating converts to six points; an “exceeded” to four points, which is the baseline; a “met” to two points; and a “not met” to zero points. If the business unit exceeds its point budget, it has the opportunity to request additional points from the Deputy Commissioner. IRS officials indicated that none of the business units requested additional points for the fiscal year 2001 ratings. IRS piloted the compensation plan in fiscal year 2000 with the top senior executives that report to the Commissioner of Internal Revenue and used it for all senior executives in fiscal year 2001. For fiscal year 2001, 31 percent of the senior executives received a rating of outstanding compared to 42 percent for fiscal year 2000, 49 percent received a rating of exceeded expectations compared to 55 percent, and 20 percent received a rating of met expectations compared to 3 percent. In fiscal year 2001, 52 percent of senior executives received a bonus, compared to 56 percent in fiscal year 2000. IRS officials indicated that they are still gaining experience using the new compensation plan and will wait to establish trend data before they evaluate the link between performance and bonus decisions. FHWA weights the elements it uses to appraise senior executive performance to make meaningful distinctions among its senior executives. These elements include (1) strategic and performance plan accomplishments and corporate management improvements and results and (2) job significance and complexity. 
The senior executives receive a score totaling 100 points, with a maximum of 70 points for strategic and performance plan accomplishments and corporate management improvements and results, and a maximum of 30 points for job significance and complexity. FHWA provides definitions for assigning points. For example, to receive all 70 points for strategic and performance plan accomplishments, the executive must achieve all the performance expectations identified in the individual performance plan, including exceptional advancement on the corporate management strategies. To receive all 30 points for job significance and complexity, the executive must have a position that is highly visible, with a high degree of difficulty due to legislation, court decisions, political pressures, and other factors. Rating officials use these scores in assigning a rating to senior executives of “achieved results,” “minimally satisfactory,” or “unsatisfactory.” In fiscal year 2001 and 2000, all 45 senior executives received a rating of achieved results. FHWA recommended 20 of the 45 senior executives (44 percent) receive bonuses in fiscal year 2001 and 22 of the 45 executives (49 percent) in fiscal year 2000. For both years, each senior executive recommended for a bonus received one. For VBA, a task force was established in April 2001 to review VBA’s claims processing. It found that 82 percent of VBA’s senior managers were recommended to receive either a performance bonus or an increase in senior executive rank in 2000 when performance for the organization as a whole was considerably below program goals and performance varied among regional offices. Stating that there must be appropriate rewards for outstanding performance and negative consequences for those who do not perform according to their performance agreement, the task force recommended that detailed performance agreements be incorporated into the performance standards for the senior executives in the regional offices. Following VA guidance for bonuses in fiscal year 2001, senior executives in VBA receive bonuses by demonstrating significant individual and organizational achievements during the performance appraisal year as evidenced by clearly documented, specific executive achievements, such as substantive improvements in the quality of work or significant cost reductions. In fiscal year 2001, 50 percent of the senior executives in VBA received a bonus, with 24 of the 50 executives receiving the highest performance rating of “outstanding.” BLM appraises senior executives’ performance and recommends them for performance awards based on their achievement of the performance elements in their individual performance plans and the executives’ demonstration of leadership excellence. BLM rates its senior executives’ performance as “pass,” “provisional,” or “fail.” Senior executives receive a pass rating if they fulfill the fully successful standards for the performance elements in their performance plans. All of the senior executives received a pass rating in the 2000 and 2001 performance appraisal cycles. For the 2000 and 2001 performance appraisal cycles, the Department of the Interior guidance limited BLM’s total number of senior executive nominations for performance awards, including the Secretary’s Executive Leadership Award, performance bonuses, or pay rate increases, to no more than 45 percent or 9 of its career senior executives as of the end of the appraisal cycles. 
Of BLM’s 17 rated career senior executives, 4 received performance bonuses, 3 received pay rate increases, and 1 received the Secretary’s Executive Leadership Award in 2000. In 2001, of BLM’s 19 rated career senior executives, 5 received performance bonuses and 4 received pay rate increases. Leading organizations use their performance management systems to achieve results, accelerate change, and facilitate communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. Toward this end, BLM, FHWA, IRS, and VBA are in the early stages of implementing their new performance management systems for senior executives. In particular, while these agencies identified core competencies and supporting behaviors for senior executives to follow that are intended to contribute to results, they identified to a much lesser extent targets for senior executives to meet that are directly linked to organizational goals. In addition, they identified expectations for senior executive performance for customer satisfaction and employee perspectives. These agencies have taken the first steps in creating a performance management system for senior executives that is a strategic tool for holding individuals accountable for their contributions to results and organizational success. Their initial implementation approaches to manage senior executives’ performance recognize the importance of providing useful data so that executives can track their individual performance against organizational results on a real-time basis and the benefit of requiring follow-up action on customer and employee issues through workgroup meetings and action plans. However, these agencies also acknowledge that they are still working at implementing effective systems that can make meaningful distinctions in performance. There are significant opportunities to strengthen these efforts as they move forward in holding senior executives accountable for results. In particular, more progress is needed in explicitly linking senior executive expectations for performance to results-oriented organizational goals, fostering the necessary collaboration both within and across organizational boundaries to achieve results, and demonstrating a commitment to lead and facilitate change. These expectations for senior executives will be critical to keep agencies focused on transforming their cultures to be more results oriented, less hierarchical, and more integrated, and thereby be better positioned to respond to emerging internal and external challenges, improve their performance, and assure their accountability. We provided a draft of this report in August 2002 to the Secretaries of the Interior, Transportation, the Treasury, and Veterans Affairs and the Commissioner of Internal Revenue for their review. We received written comments from the Commissioner of Internal Revenue stating that our draft report accurately accounted for the factors that influence IRS’s executive performance management and compensation system (see app. VI). In addition, cognizant agency officials from the Departments of the Interior, Transportation, and Veterans Affairs responded that they generally agreed with the contents of the draft report. In some cases, they also provided technical comments to clarify specific points regarding the information presented. Where appropriate, we have made changes to this report that reflect these technical comments. 
We are sending copies of this report to the Secretaries of the Interior, Transportation, the Treasury, and Veterans Affairs; the Commissioner of Internal Revenue; and the Director of OPM. We will also make this report available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Lisa Shames on (202) 512-6806 or [email protected]. Janice Lichty and Bryan Rasmussen were key contributors to this report. To meet our objectives, we focused our review on federal agencies that have implemented a set of balanced expectations in their performance management systems for all or a significant portion of their senior executives prior to the Office of Personnel Management (OPM) amending the regulations. Based on research and interviews with knowledgeable officials, we identified agencies that had relevant experience in using a set of balanced expectations for senior executive performance management systems. Among the possible agencies with relevant experience, we selected the Bureau of Land Management (BLM), Federal Highway Administration (FHWA), Internal Revenue Service (IRS), and Veterans Benefits Administration (VBA) because they provided variation in mission, size, and organizational structures. To describe the sets of balanced expectations these agencies used to appraise senior executive performance, we collected and analyzed agencies’ strategic plans, annual performance plans, and performance reports; personnel policies and memoranda; survey instruments and analyses; and the individual performance plans and self-assessments of the senior executives we interviewed. We used the categories in OPM’s regulations to classify the agencies’ expectations for senior executive performance—organizational results, customer satisfaction, and employee perspectives. Based on our review of the agencies’ expectations, we identified and categorized the general approaches that agencies took to contribute to organizational results, customer satisfaction, and employee perspectives, as shown in tables 1, 2, and 3 and included a sample of expectations along these approaches. Our analysis and characterization for categorizing the performance expectations and examples of those expectations was independently reviewed and agreed upon for the three categories. To identify the initial implementation approaches these agencies have taken that may be helpful to other agencies as they manage senior executive performance against the balanced expectations, we interviewed senior executives in person or over the telephone at the four agencies. At BLM, FHWA, and VBA, we randomly selected 10 career senior executives to interview at each agency, including 5 executives randomly drawn from central headquarters and 5 executives randomly drawn from the field offices. At IRS, because of the larger number of senior executives, we randomly selected 21, or 10 percent, of the career senior executives to interview, including at least 5 executives randomly drawn from central headquarters and at least 5 executives randomly drawn from the field offices. The random selections covered two or more levels of the Executive Schedule for senior executives in each agency. This sample is representative of the senior executives at their respective agencies. We identified the examples described in this report through our interviews with senior executives and other agency officials. 
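The random selections described above can be thought of as a simple stratified draw from headquarters and field rosters. The sketch below is a notional reconstruction under assumed rosters rather than a description of the actual selection procedure; the roster names and the seed are hypothetical.

```python
import random

def select_executives(headquarters: list, field: list,
                      per_stratum: int = 5, seed=None) -> list:
    """Randomly draw an equal number of executives from each stratum."""
    rng = random.Random(seed)
    return rng.sample(headquarters, per_stratum) + rng.sample(field, per_stratum)

# Hypothetical rosters for one agency.
hq_roster = [f"HQ executive {i}" for i in range(1, 21)]
field_roster = [f"Field executive {i}" for i in range(1, 31)]

print(select_executives(hq_roster, field_roster, per_stratum=5, seed=42))
```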
We did not independently verify the testimonial evidence from the interviews or the documents that senior executives and agency officials provided to us. We also did not attempt to assess the prevalence of the examples we cite among the senior executives within the same agency. Therefore, senior executives other than those cited for a particular example may, or may not, be engaged in the same actions. In addition, we spoke with the Commissioner of Internal Revenue, the former Under Secretary of Benefits for VBA, and the former Deputy Director for BLM to discuss their agencies’ experiences and challenges in implementing balanced expectations in their performance management systems. We interviewed agency officials responsible for managing human capital, implementing the Government Performance and Results Act (GPRA), and administering agencywide customer and employee satisfaction surveys, as well as other agency officials identified as having particular knowledge of balanced expectations and performance management in general. We spoke to OPM officials responsible for the senior executive performance management regulations to discuss the development and implementation of the regulations, as well as officials responsible for amending and implementing the general workforce performance management regulations. Lastly, we met with the President of the Senior Executives Association and other subject matter experts from the National Academy of Public Administration, Brookings Institution, and PricewaterhouseCoopers Endowment for The Business of Government. We performed our work in Washington, D.C. from October 2001 to July 2002 in accordance with generally accepted government auditing standards. BLM’s senior executive performance plans for the 2001 performance appraisal cycle from July 1, 2000, through June 30, 2001, are structured around four performance elements that correspond with BLM’s strategic goals. These performance elements and their fully successful performance standards include the following. Restore and maintain the health of the land: Understand and plan for the condition and use of the public lands by conducting assessments and completing land use plan evaluations; restore at-risk resources and maintain functioning systems, particularly riparian areas and watersheds; incorporate management land health standards into decisions and plans; implement the National Fire Plan; and emphasize resource protection by assuring that work commitments for monitoring and inspection are met, appropriate enforcement actions are taken, and results are recorded. Serve current and future publics: Ensure the National Environmental Policy Act and environmental analyses are sufficient to sustain program decisions; reduce threats to public health, safety, and property by completing deferred maintenance projects; continue action on energy and mineral leases, permits, and claims; implement BLM’s wild horse and burro national strategy in accordance with program directives; and improve land, resource, and title information by participating in the development and implementation of bureauwide data standards. 
Improve organizational effectiveness: Continue to improve customer service through timely and enhanced consultation, cooperation, and communication with government officials and others to build consensus; review public comment cards and survey results to determine where improvements can be made; expand partnerships to implement on-the-ground activities; implement the service-first concept and improve overall services; and improve program accountability and performance by staying within the organizational cost targets and assuring the accuracy of cost data, conducting the work aligned with cost targets, and improving work processes and internal management practices based on analyses of management and evaluation data, such as activity-based cost data. Improve human resources management and quality of worklife: Develop a strategy to provide for a needed workforce by developing and implementing a response to the workforce plan; maintain a trained and motivated workforce by implementing plans and strategies to improve the satisfaction of BLM employees by assuring each employee has a current position description and individual performance plan linked to the strategic plan, and providing appropriate training for employees at all levels; demonstrate improvement in diversity and composition of the workforce as measured by the percent of hiring opportunities in which diversity candidates are placed; demonstrate commitment to nondiscrimination in the workplace by ensuring that individuals are not denied employment or career advancement opportunities due to gender, race, and other factors; and provide development opportunities to subordinates to help them participate in the goal of achieving workforce diversity. BLM included the fully successful performance standards described above for each of the performance elements in the executives’ individual performance plans. Executives receive a rating of “pass” if they meet the fully successful standard for an element. Executives could also receive a rating of “provisional” or “fail” for each element. Executives receive a summary rating of “pass” if they fulfill the fully successful standards for all the performance elements in their performance plans. Executives could also receive a summary rating of “provisional” or “fail.” According to BLM officials, BLM is planning to revise the performance elements in its senior executive performance plans for the 2002 performance appraisal cycle to reflect the priorities of BLM and the Department of the Interior. The elements include GPRA, key management objectives, the President’s Management Agenda, and the 4Cs philosophy (consultation, cooperation, communication, all in the service of conservation). Each performance element will include a fully successful performance standard. The performance elements and standards include the following. GPRA—(1) Restore and maintain the health of the land by conducting assessments and completing land use plan actions as planned, (2) serve current and future publics by ensuring the National Environmental Policy Act and environmental analyses are sufficient to sustain program decisions implementing the President’s Energy Plan while assuring that the National Environmental Policy Act and planning guidelines are met, and (3) implement BLM’s wild horse and burro national strategy.
Key management objectives—Implement the Director’s priorities by (1) assisting in the development of options to establish conservation reserves, (2) improving the productivity and diversity of public lands, (3) executing the National Fire Plan, (4) developing opportunities for alternative sources of energy in land use planning and program implementation, (5) completing new or revised land use plans as proposed in congressional justifications, and (6) achieving targets for abandoned mine lands/herd management areas consistent with the revised wild horse and burro strategy and BLM’s annual performance plan. President’s Management Agenda—Improve financial management, improve performance and budget integration, implement e-government, make progress in the strategic use of human capital, and develop and implement BLM’s competitive sourcing plan. Specific ways to address these areas were included. 4Cs philosophy—Demonstrate innovative approaches to implementing the Secretary’s 4Cs so that those impacted by BLM decisions are considered and their concerns addressed; and demonstrate personal leadership through significant contributions to achieving the organization’s goals, positioning the organization for the future, working through complex situations, and working with others. FHWA’s senior executive performance plans for fiscal year 2001 consist of performance objectives that senior executives work to achieve during the year. FHWA requires its senior executives to set critical and noncritical performance objectives that are tailored to their responsibilities within their respective offices and aligned with the FHWA Administrator’s performance agreement with the Secretary of Transportation. These objectives are to contribute to FHWA’s corporate management strategies, which are based on the Malcolm Baldrige and the Presidential Quality Award criteria. These criteria include the following. Leadership—Strengthen FHWA’s Leadership System, through training and other developmental initiatives, for the agency’s new organizational culture; set the vision and direction, ensure accountability, and provide the resources to deliver the products and services to the customers in an excellent and timely manner. Strategic planning—Translate strategies into unit, division, team, and individual action plans with performance measures based on the strategic objectives and performance goals. Customer and partner focus—Identify customer and partner needs and measure their level of satisfaction; achieve success through extensive cooperation and partnering with state and local transportation agencies; receive and act upon feedback from customers and use that information to improve products and services to ensure customer and partner needs are met. Information and analysis—Identify and develop key business information systems that meet and track the Department of Transportation and FHWA strategic goals; create an environment in which knowledge, as a key asset of the agency, is managed, shared, and used effectively. Human resource development and management—Increase employee technical competence, authority, and the tools needed to meet agency and customer needs; continue to develop and utilize the full potential of the agency’s human resources and create an environment that is conducive to performance excellence and personal and organizational growth.
Process management—Design, manage, and improve key processes to achieve better results; use customer- and employee-focused support, service, and delivery processes to continually improve performance and enhance products and services. Business results—Develop critical FHWA business metrics to measure the overall quality of processes and services and report results; use customer feedback and benchmark high-performance organizations to continuously improve overall performance for the customers. FHWA appraises senior executives on their achievement towards each critical and noncritical performance objective. Initial assessment ratings: For each performance objective in their individual performance plan, senior executives receive an assessment of “achieved results,” “minimally satisfactory,” or “unsatisfactory.” Achieved results—Performance that fully meets, exceeds, or demonstrates sufficient progress toward the attainment of the objective as defined by the performance targets. Minimally satisfactory—Performance that only partially meets or only partially demonstrates sufficient progress toward the attainment of the objective as defined by the performance targets. Unsatisfactory—Performance that fails to meet or demonstrate sufficient progress toward attainment of the objective as defined by the performance targets. FHWA appraises senior executives on their achievement towards all the performance objectives in their individual plans. Summary ratings: Senior executives receive a summary rating on the achievement of their performance objectives. The summary rating levels include “achieved results,” “minimally satisfactory,” and “unsatisfactory.” Achieved results—All critical objectives must be assessed achieved results. No more than one noncritical objective can be assessed minimally satisfactory and none can be assessed unsatisfactory. Minimally satisfactory—One or more critical objectives or two or more noncritical objectives assessed minimally satisfactory, or one or more noncritical objectives assessed unsatisfactory. Unsatisfactory—Unsatisfactory assessment on any critical objective. IRS’s senior executive performance plans for fiscal year 2001 are structured around responsibilities, commitments, and a retention standard. Responsibilities: The responsibilities reflect the core values of IRS that are shared by all executives and managers for achieving performance excellence. The responsibilities are structured around (1) leadership, (2) employee satisfaction, (3) customer satisfaction, (4) business results, and (5) equal employment opportunity. Leadership—Successfully leads organizational change, effectively communicates the mission and strategic goals to employees and other stakeholders, responds creatively to changing circumstances, and uses sound judgment to make effective and timely decisions. Employee satisfaction—Ensures that a healthy work environment is maintained, creates an environment for continuous learning and development opportunities, and effectively uses feedback and coaching to promote teamwork and skill sharing. Customer satisfaction—Listens to customers, analyzes their feedback to identify their needs and expectations, builds strong alliances, and involves stakeholders in making decisions and achieving solutions. Business results—Develops and executes plans to achieve organizational goals, leverages resources to maximize efficiency and produce high quality results, and learns about current and emerging issues in own field of expertise. 
Equal Employment Opportunity—Takes steps to implement equal employment opportunity; cooperates with equal employment opportunity officials on complaints; assigns work and makes employment decisions without regard to sex, race, color, national origin, and other factors; and monitors work environment to prevent instances of prohibited discrimination and/or harassment. Commitments: Executives are to identify commitments they will accomplish during the year that are based on the responsibilities. The commitments describe a limited number of critical actions; objectives, such as personal development objectives; and/or results that the executive will work to achieve. They are specific to each executive and should be derived from, and directly contribute to, the program priorities and objectives established by the organization’s annual business or operations plan. In addition, senior executives are to establish a principal commitment in their individual performance plans focused on the overall attainment of objectives to accomplish the operating division’s performance plan. Retention standard: IRS developed a performance standard relating to the fair and equitable treatment of taxpayers that senior executives must meet. The retention standard states: “Consistent with the individual’s official responsibilities, administers the tax laws fairly and equitably, protects taxpayers’ rights, and treats them ethically with honesty, integrity, and respect.” According to IRS, the executive and supervisor review the retention standard to ensure mutual understanding. IRS appraises senior executives on their achievement towards their responsibilities, commitments, and retention standard. Responsibilities: The executives receive a rating on how well they achieved their responsibilities during the year and the actions taken to support the accomplishment of the strategic goals and annual business plan. These ratings include the following. Exceeded—In addition to placing appropriate emphasis on the five sets of responsibilities, served as a role model in one or more of the five sets. Actions taken were exemplary in promoting accomplishment of the annual business plan and strategic goals. Met—Placed appropriate emphasis on each of the five sets of responsibilities. Appropriate actions were taken to support accomplishment of the annual business plan and strategic goals. Not met—Placed insufficient emphasis on one or more sets of responsibilities. Actions taken were inappropriate, ineffective, or undermined strategic goals or annual business plan accomplishment. Commitments: The executives receive a rating on how well they achieved the desired results outlined in their performance commitments. The ratings include the following. Exceeded—Overcame significant obstacles, such as insufficient resources, conflicting demands, or unusually short time frames, in achieving or exceeding desired results. Met—Achieved or made substantial progress toward achievement of desired results. Not met—Did not achieve or make substantial progress toward achievement of desired results. Retention standard: Executives are rated on whether they met or failed to meet their retention standard. Senior executives receive a summary evaluation, which combines the ratings they received for their responsibilities, commitments, and retention standard. Summary evaluation ratings include the following. 
Outstanding—The executive met the retention standard and performed as a model of excellence by exceeding the responsibilities and commitments in the individual performance plan, despite constantly changing priorities, insufficient or unanticipated resource shortages, and externally driven deadlines. The executive consistently demonstrated the highest level of integrity and performance in promoting the annual business plan and IRS’s strategic goals and objectives. The executive’s effectiveness and contributions had impact beyond his or her purview. Exceeded—The executive met the retention standard and generally exceeded both the responsibilities and commitments in the individual performance plan. However, the executive may have met the retention standard and demonstrated exceptional performance in either responsibilities or commitments and met the expectations of the other. The executive may have overcome significant organizational challenges, such as coordination with external stakeholders (e.g., the National Treasury Employees Union and the Congress) or insufficient resources. The executive’s effectiveness and contributions may have had impact beyond his or her purview. Met—The executive met the retention standard and the responsibilities and commitments in the individual performance plan with solid, dependable performance. The executive consistently demonstrated the ability to meet the requirements of the job. Challenges encountered and resolved are part of the day-to-day operation and are generally routine in nature. Not met—The executive failed to meet the retention standard, responsibilities, and/or commitments. Repeated observations of performance indicated negative consequences in key outcomes, such as quality, timeliness, and business results. Immediate improvement is essential. VBA’s performance plans for its senior executives in the regional offices for fiscal year 2001 are structured around common performance elements— service delivery, organizational support/teamwork, leadership development, external relations, and workplace responsibilities. Service delivery: The executive leads the regional office in the pursuit of outstanding performance in all applicable program areas, and as a team member helps the Service Delivery Network and VBA as a whole to improve performance. Appropriate emphasis is placed on the balanced scorecard and the executive’s performance against the balanced scorecard targets. The categories of the balanced scorecard include: customer satisfaction—organizational perspective from the viewpoint of the veterans, service delivery partners, and other stakeholders; accuracy—the quality of work performed; speed or timeliness—the length of time it takes to complete specific end products or work units; unit cost—costs associated with producing a service or a product; and employee development and satisfaction—the skill level of the workforce, training needs, course development, and satisfaction with the job and organization. Organizational support/teamwork: The executive regularly participates in activities and projects intended to further the goals of the Service Delivery Network and VBA as a whole while functioning as a dedicated and skillful team player. These activities typically require the contribution of local resources such as projects at the national level, special ad hoc efforts, and innovations. The executive is assigned to a certain number of projects during the year in light of the size of the executive’s regional office. 
Leadership development - executive competencies and qualifications: The executive identifies developmental activities in a proposed leadership development plan, which is to be submitted at the beginning of the performance year. The executive engages in substantial personal development activities such as attending training courses, reading books, and undertaking projects in order to develop skills. These activities focus on OPM’s Executive Core Qualifications including leading change, leading people, results driven, business acumen, and building coalitions and communications. External relations: The executive builds effective, productive relationships with organizations external to VBA in order to further the department’s goals and interests. Examples of activities include work on a Federal Executive Board project, participation in Veterans Integrated Service Network meetings, and relations with the media, congressional offices, and service organizations. Workplace responsibilities: The executive assures a high quality of work life for all employees of the regional office by: promoting and maintaining an effective labor-management relations program that incorporates the principles of partnership; creating and maintaining a working environment that is free of discrimination and one that assures diversity in the workplace; ensuring that plans exist and are adequately implemented to recruit, train, retain, motivate, empower, and advance employees and that they promote the needs and goals of the individual and the organizations; and providing a safe, healthy work environment. VBA identified indicators of performance for this element including performance management and recognition, employee development and training, equal employment opportunity policy statement, physical plant enhancements, and employee satisfaction surveys. Senior executives receive a level of achievement of “exceptional,” “fully successful,” or “less than fully successful” for each element in their individual performance plan as measured against the established performance requirements. For example, for organizational support and teamwork, the executive’s performance is acceptable if the rater determines that completion of projects and innovations is substantially equal to agreed-upon expectations and the executive demonstrates cooperation with other executives in the attainment of these goals where applicable. For elements where a level of achievement other than fully successful has been assigned, the rating official must describe the executive’s achievements on additional pages. Exceptional—Fully successful performance requirements for the element are being significantly surpassed. This level is reserved for employees whose performance in the element far exceeds normal expectations and results in major contributions to the organization. Fully successful—Performance requirements for the particular element when taken as a whole are being met. This level is a positive indication of employee performance and means that the employee is effectively meeting performance demands for this component of the job. Less than fully successful—A level of performance that does not meet the requirements established for the fully successful level. Assignment of this achievement level means that performance of the element is unacceptable. 
The senior executives receive a summary rating level of “outstanding,” “excellent,” “fully successful,” “minimally satisfactory,” or “unsatisfactory” based on the achievement levels assigned for each performance element. Outstanding—Achievement levels for all elements are designated as exceptional. Excellent—Achievement levels for all critical elements are designated as exceptional. Achievement levels for noncritical elements are designated as at least fully successful. Some, but not all, noncritical elements may be designated as exceptional. Fully successful—The achievement level for at least one critical element is designated as fully successful. Achievement levels for other critical and noncritical elements are designated as at least fully successful or higher. Minimally satisfactory—Achievement levels for all critical elements are designated as at least fully successful. Unsatisfactory—The achievement level(s) for one (or more) critical element(s) is (are) designated as less than fully successful. For fiscal year 2002, VBA revised its performance plans for the senior executives in the regional offices to improve individual accountability for performance elements by linking organizational performance goals and actual performance with meaningful and measurable performance elements. VBA outlined specific sub-elements for the service delivery element and replaced the leadership development element with two additional elements—program integrity and information security. These revisions include the following. Service delivery: This element focuses on the executive’s performance towards the balanced scorecard targets at the regional office and national levels, in addition to specific performance priorities with corresponding targets. Achieve monthly rating production goals—The executive will meet monthly rating production goals in either 9 out of 12 months or meet or exceed overall average monthly production goals. Improve the timeliness of rating end products completed—The executive will meet the average days of completion for specific end products and improve a specified percentage based on his or her office’s performance relative to the national performance. Also, the executive will improve the cycle times of claims processing in development, rating, and authorization time as shown in the Claims Automated Processing System records. In addition to reducing the cycle time, the executive will establish 70 percent of his or her claims, after December 1, 2001, within 7 days. Reduce total compensation and pension cases pending over 6 months—The executive will improve a specified percentage based on the percentage of fiscal year 2001 cases pending over 6 months. For example, if an executive’s office has over 50 percent of compensation and pension cases pending over 6 months as of the end of fiscal year 2001, the executive will achieve a 5 percent improvement by the end of the 2002 rating year. Reduce the pending inventory of compensation and pension claims—The executive will reduce the number of rating and authorization cases pending by set targets for each office. Meeting these targets will reduce VBA’s inventory of rating-related cases to a total of 315,586 cases and reduce VBA’s authorization cases by at least 20 percent by the end of the rating period.
Reduce inventory of appeals and achieve improvement in remand timeliness—The executive will reduce the total number of pending appeals by 10 percent and will achieve a 10 percent improvement in the average number of days a remand is pending.

Achieve established balanced scorecard targets—The executive’s performance on this element will be determined by comparing the regional office’s performance towards the regional office scorecard targets (weighted 80 percent) and the office’s contribution to VBA’s national scorecard targets (weighted 20 percent). The executive must achieve a minimum level of 90 percent of the composite target.

Service delivery network resource center and regional processing organization functions—Service delivery network resource center executives are required to meet specific monthly production targets either in 9 of 12 months or meet or exceed the overall average of monthly production goals. Regional processing organization directors will have an additional standard provided at a later date. Additional priorities as established by the Secretary of Veterans Affairs will also be used to evaluate performance in this element.

Program integrity: The executive will lead his or her regional office to ensure compliance with VBA’s program integrity directives. The executive is responsible for ensuring that program integrity initiatives and policies are implemented, assessed through an effective internal control process, and adjusted as necessary to achieve appropriate results. The executive will accomplish this by adhering to VBA’s program integrity directives and the Inspector General recommendations that are applicable and ensuring that on-site reviews do not reveal critical flaws in oversight of program integrity issues.

Information security: The executive must exercise due diligence in efforts to plan, develop, coordinate, and implement effective information security procedures as identified by the Office of Management and Budget, the National Institute of Standards and Technology, Veterans Affairs’ policies, and VBA guidance and policy documents. The executive will have met this element by ensuring that information system security plans exist and are implemented in accordance with the National Institute of Standards and Technology and Office of Management and Budget guidelines; ensuring that annual risk assessments are conducted for each identified information system—applications, hardware, and software—to ensure that the identified risks, vulnerabilities, and threats are addressed by appropriate security controls; and ensuring that all employees comply with departmental training requirements to understand their information security responsibilities.
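The summary rating rules listed above amount to a small decision procedure over the element-level achievement levels. The sketch below is purely illustrative and is not VA software; the element names in the example are hypothetical. It simply encodes the published rules as written.

```python
# Illustrative sketch (not VA's actual software) of how a summary rating
# could be derived from element achievement levels under the rules above.
EXCEPTIONAL = "exceptional"
FULLY_SUCCESSFUL = "fully successful"
LESS_THAN_FULLY_SUCCESSFUL = "less than fully successful"

def summary_rating(elements):
    """elements: dict mapping element name -> (achievement_level, is_critical)."""
    critical = [lvl for lvl, is_crit in elements.values() if is_crit]
    noncritical = [lvl for lvl, is_crit in elements.values() if not is_crit]
    all_levels = critical + noncritical

    if any(lvl == LESS_THAN_FULLY_SUCCESSFUL for lvl in critical):
        return "unsatisfactory"      # any critical element below fully successful
    if all(lvl == EXCEPTIONAL for lvl in all_levels):
        return "outstanding"         # every element exceptional
    if all(lvl == EXCEPTIONAL for lvl in critical) and \
       all(lvl in (EXCEPTIONAL, FULLY_SUCCESSFUL) for lvl in noncritical):
        return "excellent"           # all critical exceptional, noncritical at least fully successful
    if all(lvl in (EXCEPTIONAL, FULLY_SUCCESSFUL) for lvl in all_levels):
        return "fully successful"
    return "minimally satisfactory"  # critical elements met, a noncritical element fell short

# Hypothetical example: two critical elements and one noncritical element.
print(summary_rating({
    "service delivery": (EXCEPTIONAL, True),
    "program integrity": (FULLY_SUCCESSFUL, True),
    "external relations": (FULLY_SUCCESSFUL, False),
}))  # prints "fully successful"
```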
Effective performance management systems link individual performance to organizational goals. In October 2000, the Office of Personnel Management amended regulations to require agencies to link senior executive performance with organizational goals; to appraise executive performance by balancing organizational results with customer satisfaction, employee perspective, and other areas; and to use performance results as a basis for pay, awards, and other personnel decisions. Agencies were to establish these performance management systems by their 2001 senior executive performance appraisal cycles. Because they implemented a set of balanced expectations prior to the Office of Personnel Management requirement, GAO studied the Bureau of Land Management's, Federal Highway Administration's, Internal Revenue Service's, and Veterans Benefits Administration's use of balanced expectations to manage senior executive performance in order to identify initial approaches that may be helpful to other agencies in holding senior executives accountable for results. The agencies GAO reviewed developed an initial set of balanced expectations for senior executives to address in their individual performance plans. GAO found that these agencies are in the early stages of using a set of balanced expectations to appraise senior executive performance, and there are significant opportunities to strengthen their efforts as they move forward in holding executives accountable for results. Specifically, more progress is needed in explicitly linking executive expectations for performance to organizational goals. In addition, while these agencies address partnering with customers and other stakeholders, greater emphasis should be placed on fostering collaboration within and across organizational boundaries to achieve results. Successful organizations understand that they must often change their culture to successfully transform themselves, and such change starts with top leadership. Senior executive performance expectations to lead and facilitate change could be a critical element as agencies transform themselves.
A number of entities are involved in the supply chain. These entities include the following:

Importers: Bring articles of trade from a foreign source into a domestic market. Importers are responsible for providing ISF data, but an importer may designate an authorized agent to file the ISF on the importer’s behalf.

Carriers: Transport goods from a foreign port to a U.S. port. For foreign cargo remaining on board, the carrier is considered the importer and is required to submit the ISF for the shipment.

Licensed customs brokers: Clear goods through customs by preparing and filing proper entry forms, advising importers on duties to be paid, and arranging for delivery of imported goods to the destination. They also may act as the designated agent for importers in filing their ISFs.

Shippers: Supply or own the commodities that are being shipped.

Freight consolidators: Accept partial container shipments from individual shippers and combine the shipments into a single container for delivery to the carrier.

Non-vessel operating common carriers: Buy shipping space on a vessel, through a special arrangement with an ocean carrier, and resell the space to individual shippers.

Supply chain entities may participate in CBP’s Customs-Trade Partnership Against Terrorism (C-TPAT), a voluntary program designed to improve the security of the international supply chain while maintaining an efficient flow of goods. Under C-TPAT, CBP officials work in partnership with private companies to review their supply chain security plans to improve members’ overall security. In return for committing to making improvements to the security of their shipments by joining the program, C-TPAT members may receive benefits, such as reduced numbers of inspections or shorter border wait times for their shipments. Within 1 year of a member’s initial certification into the program, CBP is to conduct a validation to ensure that the security measures outlined in the certified members’ security profiles and periodic self-assessments are reliable, accurate, and effective. As of July 8, 2010, 4,416 importers were members of C-TPAT.

In June 2004, CBP launched the Advance Trade Data Initiative with the goal of identifying information about shipments in advance of their arrival in the United States for improving the targeting of containers that could be used by terrorists to transport dangerous cargo. In the process of identifying such information for the Advance Trade Data Initiative, CBP consulted with its Trade Support Network in 2005 and formed a Cargo Targeting Task Force in March 2006 to review the initiative and to make recommendations for improving targeting of high-risk oceangoing cargo. Figure 1 shows a portion of the millions of cargo containers that are shipped to the United States each year that CBP is to screen for potential threats.

In October 2006, the SAFE Port Act was enacted, which requires CBP to collect additional data related to the movement of cargo through the international supply chain and analyze these data to identify high-risk cargo for inspection prior to cargo loading at foreign seaports. The additional data elements were to include appropriate elements of customs entry data as determined by the Secretary of Homeland Security.
The SAFE Port Act requires CBP to adhere to the parameters in section 343(a) of the Trade Act of 2002, including provisions requiring consultation with a broad range of affected trade industry entities and restricting the use of information to security purposes, in developing the regulation. In 2007, CBP distributed to trade industry groups a Proposal for Advance Trade Data Elements, which proposed the data elements that later became known as the 10+2 data elements, and posted the proposal on its Web site with a request for comments from the public. In January 2008, CBP published a notice of proposed rulemaking and, in November 2008, CBP issued its interim final rule to require the submission of these additional data elements. The interim final rule went into effect on January 26, 2009, and provided a 1-year flexible enforcement period. Importers are responsible for submitting data elements for the ISF, and the required data elements differ depending on the cargo’s destination. For cargo containers that are bound for the United States as the final destination, the rule requires importers to submit 10 data elements to CBP 24 hours prior to loading. Four of these 10 data elements are identical to elements submitted later for customs entry purposes. For cargo containers that are transiting the United States but for which the United States is not the final destination, the rule requires importers to submit 5 data elements to CBP prior to loading. (See table 1 for the required ISF data elements.) In addition to data already provided by carriers under the 24-hour rule, which requires that carriers submit cargo manifest information—a list of cargo carried in a container—to CBP 24 hours before U.S.-bound cargo is loaded onto a vessel at a foreign port, carriers are required to provide the Additional Carrier Requirements, which include the following two data elements: Vessel stow plan: No later than 48 hours after departure from the last foreign port, carriers must submit information to include the vessel operator, voyage number, the stow position of each container, hazardous material code (if applicable), and the port of discharge. For a voyage of less than 24 hours (short haul), CBP requires that the stow plan be provided any time prior to arrival at the first U.S. port. For an example of a vessel stow plan see figure 2. Container status messages (CSM): CSMs report terminal container movements, such as loading and discharging the vessel, and report the change in the status of containers, such as if they are empty or full. CSMs also report conveyance movements, such as vessel arrivals and departures. Carriers must supply CSMs daily for certain events relating to all containers laden with cargo destined to arrive within the limits of a port in the United States by vessel. For U.S.-bound cargo, the interim final rule provides two types of flexibilities with respect to certain data elements (see table 2). These flexibilities do not apply to the ISF filings for in-transit cargo. The purpose of the flexibilities is to allow CBP to conduct a review of the data elements, including an evaluation of any specific compliance difficulties that the trade industry may be encountering with respect to these data elements. In order to ensure that importers always provide CBP with the most updated and accurate 10+2 data, CBP allows importers to alter their ISF submissions through an amendment process that is not related to the flexibilities. 
Under this standard amendment process, the importer is obligated to provide an amended ISF as soon as better information is discovered or if there are changes to the shipment—for example, if merchandise is sold in transit—up until vessel arrival in the first U.S. port. Using this standard amendment option, importers can amend any data element included in an ISF submission, regardless of whether the flexibilities were used, before a shipment’s arrival at a U.S. port. In addition, CBP allows for these standard amendments to be provided after vessel arrival at the first U.S. port even though the importer is not generally obligated to make amendments to the ISF when better information or changes to the shipment occur after vessel arrival in the first U.S. port. The collection of these additional 10+2 data elements is intended to improve high-risk targeting efforts. ATS incorporates two types of targeting rules—tactical and strategic—to identify risk factors in shipment data. Tactical rules: Rules that identify risks posed by specific intelligence or threats. Tactical rules typically identify threats based on the specific entries for one or more shipment data elements. Tactical rules are updated with minimal delay to react to the immediate and specific nature of the intelligence. Strategic rules: Rules that identify more generalized intelligence or threats or that identify relationships between different data elements within a single record or across multiple records. The process to update strategic rules involves iterations of testing to ensure that rules have their intended effect. Within ATS, CBP develops combinations, or sets, of these two types of rules and assigns numerical weights to the rules in a set to determine overall risk scores for particular threats, such as narcotics trafficking or national security threats. CBP uses one such weighted rule set—the national security weighted rule set—as targeting criteria to assess the national security risk posed by maritime cargo. TECS—a set of tactical rules that compares 10+2 data to known violators of federal law— contributes, along with other strategic and tactical rule sets, to risk scores generated by the national security weighted rule set. Based on these risk scores, as well as CBP targeters’ analyses of shipment data, CBP is to take actions to mitigate the threats. CBP assesses the risks posed by shipments repeatedly throughout the transit process. CBP reviews shipment records prior to the loading of the cargo onto a U.S.-bound vessel, as well as during shipment transit, to identify potential threats and determine if additional action, such as cargo inspection, is required. When shipment record data elements are updated with additional or amended information, ATS could identify new risks or mitigate previously identified risks. Therefore, a shipment’s overall risk score could change while the shipment is in transit. Regulatory agencies have authority and responsibility for developing and issuing regulations. The basic process by which all federal agencies develop and issue regulations is spelled out in the Administrative Procedure Act. Among other things, the act generally requires agencies to publish a notice of proposed rulemaking in the Federal Register. After giving interested persons or entities an opportunity to comment on the proposed rule by providing “written data, views, or arguments,” the agency may then publish the final rule. 
OMB is responsible for the coordinated review of agency rulemaking to ensure that regulations are consistent with applicable law, the President’s priorities, and the principles set forth in executive orders, and that decisions made by one agency do not conflict with the policies or actions taken or planned by another agency. Under Executive Order 12866, executive branch agencies must conduct a regulatory analysis for economically significant regulations (generally those rules that have an annual effect on the economy of $100 million or more). OMB also provides guidance to agencies on regulatory requirements, such as OMB Circular No. A-4, which provides analytical guidelines for agencies to use in assessing the regulatory impact of economically significant regulations. Circular No. A-4 is designed to assist analysts in regulatory agencies by defining good regulatory analysis and standardizing the way benefits and costs of federal regulatory actions are measured and reported. CBP published its Regulatory Assessment and Final Regulatory Flexibility Analysis, referred to in this report as CBP’s regulatory assessment, as part of the rulemaking process for the 10+2 rule. This assessment was prepared for CBP by an outside contractor. CBP conducted this assessment to address (1) the requirement to conduct a regulatory analysis for economically significant actions; and (2) the SAFE Port Act of 2006, which requires DHS to consider the cost, benefit, and feasibility of the rule. CBP published its initial regulatory assessment in December 2007 and a revised regulatory assessment in November 2008, which is discussed later in this report. The regulatory assessment contains a “break-even” analysis that determines how many times a West Coast port shutdown, a nuclear attack, or a biological attack would need to be prevented through use of the data in order for the benefits to equal the costs. For example, the regulatory assessment concludes that the benefits would exceed the costs of the rule if one West Coast port shutdown due to a terrorist attack were prevented over a period of 3 months to 2 years, assuming that the rule only reduces the risk of a single such event. Although the effective date of the 10+2 rule was January 26, 2009, the rule allowed for a 1-year flexible enforcement period. Since the end of the flexible enforcement period, CBP has stated that it has been applying a “measured, common sense approach” to enforcement, which includes exercising the least punitive measures necessary to obtain full compliance, evaluating noncompliance on a case-by-case basis, and continuing to provide outreach and guidance to trade industry entities. During the enforcement period, which began January 26, 2010, CBP plans to first focus on importers who have not filed ISFs for shipments by issuing warning letters and possibly subjecting some of these shipments to nonintrusive inspections, such as taking x-ray images of cargo containers. Data from the ISFs must be matched to other data sources to determine compliance, including whether each required shipment has an ISF on file and whether the ISF was filed in a timely manner. ISFs are matched to manifest data using the bill of lading number—an alphanumeric code issued by a carrier that references an individual cargo shipment in a manifest—and then the matched ISF becomes part of a shipment record that includes manifest information and the International Maritime Organization number of the vessel on which the cargo is shipped. 
Using this vessel number, the shipment can be matched to a vessel departure message, which carriers are required to supply to CBP. CBP also matches the ISF and manifest information to customs entry data. CBP’s regulatory assessment generally adheres to OMB guidance by including required elements—such as a statement of the need for the proposed action, an examination of alternative approaches, and evaluation of the benefits and costs. However, the regulatory assessment lacks transparency regarding the selection of alternatives for analysis and support for the selection of the preferred alternative. Greater transparency on this topic could have improved CBP’s regulatory assessment. Additionally, a more complete analysis of the uncertainty involved in estimating key variables used to evaluate costs and benefits, and additional information regarding some costs to foreign entities, could also have improved CBP’s regulatory assessment. CBP’s regulatory assessment addresses the three basic elements of a good regulatory assessment, as defined by OMB: Statement of the need for the proposed action: The assessment includes a statement that the regulation was based on a statutory requirement (SAFE Port Act, Section 203(b)). Examination of alternative approaches: The assessment presents four alternatives for analysis. Each of the four alternatives has different components, and table 3 outlines the requirements of each alternative analyzed in the regulatory assessment. For example, alternative 1 requires importers to submit an ISF (bulk cargo shipments are exempt from the requirement) and carriers to submit the Additional Carrier Requirements. Evaluation of the benefits and costs: In accordance with OMB guidance, because the benefits could not be quantified, the assessment includes a “break-even” analysis. For example, the analysis concludes that the benefits of the rule would equal the costs if the rule avoids a nuclear attack once in 60 to 500 years, assuming that the rule only reduces the risk of a single such event. Additionally, the regulatory assessment is generally transparent in citing sources and explaining how estimates were derived. Where the analysis relied on third-party data sources, the regulatory assessment provides references to those data sources. For example, third-party sources are cited for estimates regarding the cost to importers for each day of delay and the costs associated with a potential terrorist attack. The regulatory assessment also provides explanations for how some of the estimates used in the assessment were developed. For example, the assessment explains that the initial one-time costs to adjust business practices to implement the 10+2 rule were based on information from a COAC survey and the recurring costs for transmitting ISF data were based on interviews with representatives from the shipping, importing, and customs brokerage industries. The regulatory assessment also contains supporting documentation and analysis, including an uncertainty and sensitivity analysis, as called for by OMB guidance. 
The assessment addresses limitations and key sources of uncertainty for each of three sections of the analysis that produced estimates: (1) the baseline shipping analysis, which estimates shipping trends (such as number of importers, carriers, and U.S.-bound shipments) in absence of the rule; (2) incremental costs (such as up-front costs per importer to adapt to the rule and costs per filing) and economic impact (such as losses from potential delays); and (3) potential benefits (such as the costs avoided by preventing a terrorist attack). It also includes an uncertainty analysis for the industry’s total estimated costs and welfare losses. The sensitivity analysis analyzes the effects of variables’ uncertainty on the results of the analysis and concludes that the uncertainty associated with the initial, up-front costs to importers has the greatest effect on the results of the analysis. As a result of the sensitivity and uncertainty analysis, the assessment concludes that the likelihood of reaching the higher end of the cost range is low. Our review of the regulatory assessment found that CBP was not transparent regarding how it selected the four alternatives for analysis. According to CBP officials, CBP selected the alternatives that the contractor analyzed in the regulatory assessment. However, based on our review, there is little variation in the alternatives analyzed. Each of the alternatives is a combination of including or excluding three components—the 10 ISF data elements, an exemption for bulk cargo shipments from filing the ISF, and the two data elements for the Additional Carrier Requirements—and the regulatory assessment does not discuss whether other alternatives may have met the requirements of the SAFE Port Act. Moreover, the regulatory assessment does not discuss other potential alternatives with additional or fewer data elements or why such other alternatives were not included in the analysis. For example, it does not discuss a range of other alternatives, such as requiring 15 data elements for the ISF or only one of the two carrier data elements. According to CBP officials, the regulatory assessment does not discuss other alternatives because CBP identified the current 10+2 data elements—in consultation with trade industry stakeholders, as data elements that would significantly increase CBP’s ability to make better informed targeting decisions—prior to the SAFE Port Act requirement to collect such data. Greater transparency regarding the selection of alternatives could have improved the assessment by justifying the limited scope of the alternatives analyzed in the regulatory assessment and providing insight into CBP’s decision making. According to OMB guidance, regulatory analysis provides a way of organizing the evidence on the key effects—good and bad—of the various alternatives that should be considered in developing regulations. The motivation is either to learn if the benefits of an action are likely to justify the costs or to discover which of various possible alternatives would be the most cost-effective. According to OMB guidance, a good regulatory analysis is designed to inform the public and other parts of the government (as well as the agency conducting the analysis) of the effects of alternative actions. Including a discussion of the full scope of feasible regulations could have enhanced transparency about the regulatory assessment’s usefulness for informing decision making. 
In response to our findings, CBP officials acknowledged that more information about the 10+2 rulemaking process, specifically the selection of the 10+2 data elements, could be added in a future update to the regulatory assessment, if an update is published, to provide greater context about the decision making involved in developing the 10+2 rule. The regulatory assessment also lacks transparency regarding the final selection of alternative 1 as the preferred alternative. OMB guidance states that regulatory analyses should be transparent, and in particular that such analyses should clearly explain the assumptions used in the analysis. Three of the alternatives (alternatives 1, 2, and 3) have almost identical costs and, therefore, the number of events (terrorist attacks) that would need to be avoided to justify the costs are almost identical. Absent supporting documentation, it is not clear why, based on the information and analysis in the regulatory assessment, CBP selected alternative 1 over the other alternatives. For example, the assessment does not explain how alternative 1 may be more likely to achieve benefits, specifically prevention of terrorist attacks, than the other alternatives to justify the selection of alternative 1. The assessment states that alternative 1 was favored over alternative 2 because the impact of requiring the ISF for bulk cargo—alternative 1 exempts bulk cargo from the ISF requirement, while alternative 2 requires an ISF for all cargo—is expected to be slight given that the number of bulk cargo shipments is small compared to the number of nonbulk shipments. Furthermore, according to CBP officials, the exemption for bulk cargo was selected to mirror the requirements of the 24-hour rule—which requires that carriers submit cargo manifest information for containerized cargo but allows certain timing exemptions for bulk cargo submissions. The assessment states that alternatives 3 and 4 were rejected based on CBP’s judgment that the ISF and Additional Carrier Requirements should work in tandem to be effective. However, the regulatory assessment does not describe or analyze how or why CBP made this judgment. For example, it does not describe how the ISF and Additional Carrier Requirements are used jointly to target for risk to support the requirement to provide both types of data to CBP. In response to our findings, CBP officials acknowledged that more information could be added to the regulatory assessment to provide greater transparency on this topic. The regulatory assessment acknowledged uncertainty for the cost to importers for a day of delay and the value of statistical life, but these variables were not addressed by the assessment’s uncertainty analysis. OMB guidance states that the important uncertainties connected with regulatory decisions need to be analyzed and presented as part of the overall regulatory analysis. Uncertainties with respect to these two variables may have influenced the results of the assessment. For example, if the assessment’s estimate for the value of statistical life was too low, the resulting conclusion would be that more terrorist attacks using cargo containers would need to be prevented in a particular time period to justify the costs of the regulation and the analysis would favor a less costly alternative. 
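The break-even reasoning above can be made concrete with a short calculation. In the sketch below, every number (the annualized cost of the rule, the consequences of an attack, and the value of statistical life) is a hypothetical placeholder rather than a figure from CBP's regulatory assessment; the point is only to show how a lower value of statistical life shortens the break-even interval.

```python
# Illustrative break-even sketch with hypothetical numbers (not figures from
# CBP's regulatory assessment). Benefits equal costs when the annualized cost
# of the rule equals the expected annual loss avoided by preventing attacks.

def break_even_interval_years(annualized_cost, avoided_loss_per_event):
    """How many years may pass between prevented events for the rule to break even."""
    return avoided_loss_per_event / annualized_cost

annualized_cost = 1.0e9              # hypothetical: $1 billion per year in compliance costs
value_of_statistical_life = 6.0e6    # hypothetical VSL of $6 million
fatalities_avoided = 5_000           # hypothetical consequence of one prevented attack
property_and_economic_loss = 50.0e9  # hypothetical direct and indirect losses per attack

avoided_loss = fatalities_avoided * value_of_statistical_life + property_and_economic_loss
print(f"Break-even: one attack prevented every "
      f"{break_even_interval_years(annualized_cost, avoided_loss):.0f} years")

# Sensitivity: a lower VSL shrinks the avoided loss, so attacks must be
# prevented more often for benefits to equal costs, consistent with the
# reasoning above.
for vsl in (3.0e6, 6.0e6, 9.0e6):
    loss = fatalities_avoided * vsl + property_and_economic_loss
    print(f"VSL ${vsl / 1e6:.0f}M -> break-even every "
          f"{break_even_interval_years(annualized_cost, loss):.0f} years")
```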
The quantitative uncertainty analysis includes the number or percentage of containers that may experience delays and the length of the potential delays in the supply chain, but the assessment does not address the impact of the uncertainty associated with the estimate for the dollar cost of delay. A more complete analysis of these variables’ uncertainty could have more fully addressed the elements in OMB guidance and, therefore, could have improved the regulatory assessment. CBP officials recognized that these estimates were not addressed in the uncertainty analysis and they acknowledged that more information could be added to improve the assessment’s discussion of uncertainty. The regulatory assessment notes that lost producer surplus, or profits, which were assumed to be borne by foreign entities, are not estimated in the assessment. OMB Circular No. A-4 states that, when evaluating a regulation that is likely to have effects beyond the United States, the effects to foreign entities should be reported separately. Because the assessment does not evaluate lost producer surplus, the costs to foreign entities are not fully reported. According to the regulatory assessment, these costs are not addressed because the regulatory assessment focuses on impacts to the U.S. economy. CBP officials acknowledged that, to the extent data are available on these costs, this information could be added to the regulatory assessment. These officials also said that CBP is conducting additional analyses to determine the impact of delays resulting from the rule and to review public comments solicited in the publication of the 10+2 rule. Depending on the results of these analyses, CBP may update its regulatory assessment. If CBP publishes an update to its regulatory assessment, additional information, such as a discussion of how the alternatives were selected for analysis, an uncertainty analysis for the cost to importers for a day of delay and for the value of statistical life, and estimates for lost profits borne by foreign entities, would improve the transparency and completeness of the assessment. CBP has collected and assessed a variety of information, such as daily compliance reports, and has shared information with the trade industry, through importer progress reports and outreach events, to help improve compliance with and implementation of the 10+2 rule. CBP is also using information it has collected to monitor and help improve implementation of the rule, for example, by posting a “Frequently Asked Questions” document on its Web site that addresses some common problems. CBP is tracking the daily level of ISF compliance at each U.S. port to determine the overall level of compliance with the 10+2 rule. For all shipments scheduled to arrive at a U.S. port within 2 days, CBP assesses the percentage of shipments that have ISFs filed. For example, for shipments scheduled to arrive in the United States on April 18, 2010, a report generated on the morning of April 16, 2010, indicated that 22,310 of 26,348 shipments (85 percent) had ISFs filed. CBP also monitors data on arriving vessels that have not submitted vessel stow plans, based on data for ships that are due to arrive in port within 96 hours. CBP forwards these reports to the local port officials who then contact the carriers who have not filed stow plans to obtain the necessary information. CBP has also collected and analyzed information about the use of flexibilities in filing ISFs. 
(Information on importers’ use of flexibilities is discussed later in this report.) To gauge issues trade industry entities may have in understanding or implementing the ISF requirement, CBP has also reviewed and analyzed data on ISF errors and rejections, including determining the most common errors that result in rejections. According to a CBP analysis, which examined ISFs submitted from January 26, 2010, through March 28, 2010, 22,257 of 81,435 rejected ISFs (27 percent) were rejected because they were duplicates of ISFs already on file. This was the most common error that led to rejections. Other types of errors, such as not supplying the ISF importer number, each accounted for less than 5 percent of rejected ISFs. While the data that CBP has collected to date provide information on the most common errors or reasons for rejecting ISFs for importers who are trying to comply with the rule, the data provide limited information about the reasons for noncompliance among other importers. According to CBP officials, CBP can identify a shipment for which an ISF has been filed, but it is difficult to determine the importer responsible for filing the ISF and possible reasons for why an ISF was not filed or was not matched to the shipment. For the purposes of filing the ISF, the importer for a shipment may be one of several entities involved in the supply chain, such as the owner or purchaser of the goods, and it is left to the various supply chain entities involved with a shipment to determine who will be responsible for filing the ISF. Furthermore, a shipment lacking an ISF may appear to be noncompliant if the importer makes an error in submitting the ISF, such as providing the incorrect bill of lading number. In April 2010, CBP began sending letters to importers who appeared to be noncompliant, based on CBP’s review of data collected from ISFs and other data such as customs entries, to notify these importers of possible noncompliance and encourage them to contact CBP about any concerns they may have. CBP officials said that they have been pleased with importers’ responsiveness to these letters. CBP is providing compliance and implementation data—specifically data on the number of ISFs that were (1) accepted, (2) rejected, and (3) timely—for each importer’s filings in the form of monthly progress reports sent to the importer’s filer or directly to the importer in the case of validated C-TPAT members. According to CBP officials, providing the information directly to the importer requires a manual process to set up accounts for individual importers and, therefore, this service is only offered to validated C-TPAT members as a benefit of participating in the program. For other importers, filers can register to receive a monthly progress report with the data for each importer they represent. The data in the progress reports for the other importers are aggregated for each month. For example, a progress report will indicate the number of rejected ISFs for an importer, but it does not provide transaction-level data, such as which particular ISFs were rejected. Although CBP officials recognize some importers’ concerns that progress reports lack transaction-level data for importers who are not validated C-TPAT members, they said that CBP has no plans to include transaction-level data for progress reports other than for validated C-TPAT member importers. 
According to some importers we interviewed, the lack of transaction-level data may make it difficult for an importer to identify causes of discrepancies between its own internal data and the information presented in CBP’s progress report. However, according to CBP officials, importers or their filers already receive information for each transaction, such as messages regarding errors in the ISF or confirmation that the bill of lading number was matched to other data. In addition to the monthly progress reports, CBP has also conducted outreach sessions for members of the trade industry and has received and responded to questions and comments about the 10+2 rule. From December 2008 through December 2009, CBP sponsored 35 town hall events across the country and has conducted additional outreach sessions through trade industry associations. In April and May 2010, CBP conducted Web-based seminars targeted to reach and inform small and medium importers. CBP also responds to questions and comments from the trade industry it receives through a dedicated e-mail address as well as phone calls and e-mails to program officials and has posted a “Frequently Asked Questions” (FAQ) document on its Web site. However, some importers we interviewed expressed concern that, rather than publish its policies informally through its Web site and FAQ, CBP should publish its policies in a document that is legally binding, such as a notice in the Federal Register. In particular, one concern is that CBP has not legally obligated itself to treat its current proxy for measuring ISF timeliness (24 hours prior to vessel departure) as meeting the rule’s requirement of 24 hours prior to loading. According to CBP officials, the regulation must require the data to be submitted prior to loading because the SAFE Port Act establishes this aspect of the requirement. CBP officials said they recognize that there is no existing metric for measuring the time of loading and, therefore, CBP plans to use the proxy measure in enforcing the rule. CBP also solicited public comments regarding the flexibilities and final regulatory assessment. According to CBP officials, comments that were directly relevant to the flexibilities or the regulatory assessment will be taken into consideration in developing the final rule. CBP officials stated that some comments that were not relevant to the interim aspects of the rule were addressed in the FAQ. CBP officials said that CBP is generally satisfied with the status of ISF implementation, based on CBP data that indicate that approximately 80 percent of shipments in July 2010 were compliant with the ISF requirement. CBP officials noted that this measure of 80 percent compliance includes ISFs for U.S.-bound and in-transit cargo, and compliance rates for in-transit cargo are lower than for U.S.-bound cargo. In July 2010, 646,016 of 748,780 U.S.-bound shipments (approximately 86 percent) had submitted ISFs, whereas 20,811 of 84,170 in-transit shipments (approximately 24 percent) had submitted ISFs. CBP officials stated they have a goal of increasing compliance to about 95 percent by fall 2010. As a result, CBP is monitoring performance to identify areas to improve implementation and compliance. 
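The compliance and timeliness measures discussed above come down to simple counting and date comparisons. The sketch below is a hypothetical illustration, not CBP's system: the record layout and field names are assumed, and timeliness is assessed with the proxy measure described above (an ISF on file at least 24 hours before vessel departure).

```python
# Hypothetical illustration of ISF compliance and timeliness measurement;
# record layouts and field names are assumed, not CBP's actual data model.
from datetime import datetime, timedelta

shipments = [  # shipments expected to arrive, keyed by bill of lading number
    {"bol": "MAEU1234567", "vessel_departure": datetime(2010, 7, 1, 8, 0)},
    {"bol": "MSCU7654321", "vessel_departure": datetime(2010, 7, 2, 14, 0)},
    {"bol": "CMAU1112223", "vessel_departure": datetime(2010, 7, 3, 6, 0)},
]
isfs = {  # filed ISFs: bill of lading number -> filing time
    "MAEU1234567": datetime(2010, 6, 29, 10, 0),  # filed well before departure
    "MSCU7654321": datetime(2010, 7, 2, 1, 0),    # filed less than 24 hours before departure
}

filed = [s for s in shipments if s["bol"] in isfs]
compliance_rate = len(filed) / len(shipments)
print(f"ISFs on file for {len(filed)} of {len(shipments)} shipments ({compliance_rate:.0%})")

# Timeliness proxy: the ISF should be on file at least 24 hours before the
# vessel departs the foreign port (a stand-in for "24 hours prior to loading").
for s in filed:
    timely = isfs[s["bol"]] <= s["vessel_departure"] - timedelta(hours=24)
    print(s["bol"], "timely" if timely else "late under the proxy measure")
```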
CBP has identified issues with the implementation of the ISF for in-transit cargo, such as lack of clarity regarding the party responsible for filing the ISF for two types of in-transit cargo (immediate exportation and transportation and exportation) shipments, and CBP officials said that they plan to revise the requirements through future rulemaking. To help correct problems CBP has identified through its monitoring of ISF data, CBP has conducted further outreach efforts to members of the trade industry. For example, after identifying duplicate ISFs as the most common error and reason for rejecting ISFs, CBP officials determined that some filers or importers were resubmitting ISFs if they had received a message from CBP that the ISF had not been matched to a bill of lading. In some cases, this was occurring because the ISF preceded submission of manifest information containing the matching bill of lading number. As a result, according to CBP officials, CBP has conducted outreach sessions through trade industry associations and posted information on its FAQ to reduce the number of such resubmissions. In April 2010, CBP also began identifying importers who may not be complying and sent letters to these importers notifying them of possible noncompliance and encouraging them to contact CBP about any concerns they may have. In general, representatives of the four industry associations we spoke with said they have been satisfied with CBP’s outreach efforts during implementation of the rule. In addition to its other outreach efforts, CBP is also working to address concerns regarding the information contained in ISF progress reports, specifically the number of ISFs that cannot be measured for timeliness. In order to determine if an ISF was submitted on time, CBP must match the ISF to the vessel departure message supplied by carriers. According to CBP data, about 50 percent of the ISFs it analyzed for the May 2010 progress reports could not be assessed for timeliness because they could not be matched to vessel departure messages. For example, an ISF may not be matched to a vessel departure message if the bill of lading number on the ISF does not match a bill of lading associated with cargo on a vessel for which CBP has received a departure message. In order to improve the number of ISFs that can be matched to vessel departure messages, CBP is making adjustments to the system used by importers and their filers to submit ISFs, the Automated Broker Interface, to allow filers to query bill of lading numbers in the system. According to CBP officials, this will enable importers and filers to ensure that the bill of lading information is correct before submitting an ISF. Under the 10+2 rule, importers are required to submit complete and accurate information, and fewer errors in the bill of lading information will improve CBP’s ability to match ISF data to other data sources and monitor compliance. CBP officials also said that CBP does not make enforcement decisions based on the information in the progress reports. With respect to carriers’ implementation, CBP officials said that while there have not been many instances of major carrier companies failing to submit vessel stow plans, some smaller companies may have had trouble adapting to the requirement because they had not previously maintained vessel stow plan information. 
According to CBP officials, CBP has developed a spreadsheet format that smaller carriers can use to submit vessel stow plan information, rather than submitting it through specialized stow plan software. The number of ISFs indicating use of flexibilities—provisions that allow importers flexibility in the timing and content of submission for certain data elements—has declined over time—from 11 percent of filings in September 2009 to less than 2 percent in June 2010. CBP officials stated that the decrease in flexibility usage can be primarily attributed to the trade industry’s determination that flexibility use is unnecessary due to the existence of CBP’s standard amendment process, which allows filers to update the information in their ISF regardless of whether or not they claim flexibility use. Additionally, importers we interviewed cited a variety of reasons for the nonuse and use of flexibilities. Prior to September 13, 2009, CBP did not have a mechanism to track importers’ intended use of flexibilities and it relied instead on analyses of filed submissions to approximate the use of flexibilities. For example, CBP conducted analyses of filed submissions and concluded that relatively few importers were using the flexibility of not providing either the consolidator element (entity who loads the container) or the container stuffing (packing) location element at the time of initially filing their ISFs. Among initial ISF submissions, 99 percent provided the consolidator element and 99 percent provided the stuffing (packing) location element. To gauge importers’ understanding of the flexibilities, CBP implemented a function in its electronic filing submission system in September 2009 to allow importers to identify their intent to use flexibilities at the time they submit filings. However, this function did not allow CBP to monitor whether an importer was using both flexibilities for a single ISF. Further, beginning in November 2009, CBP adjusted its system for receiving ISFs to allow importers to indicate their intent to use both range and timing flexibilities—the data element submission options provided to importers that allow the submission of a range of acceptable responses and the initial omission of certain data elements, respectively. Prior to this change, importers could only indicate their intent to use one flexibility type, although CBP’s system allowed the data to be entered in a way that utilized both types of flexibilities. According to data obtained through CBP, ISFs indicating an intent to use range flexibilities or timing flexibilities declined from 11 percent of filings each week in September 2009 to 2 percent each week in January 2010. Further, following the start of the enforcement period on January 26, 2010, overall use of the flexibilities has remained low, with importers indicating use of the flexibilities in about 2 percent of ISFs submitted each week from January 26, 2010, through June 14, 2010 (see fig. 3). From September 13, 2009, through June 14, 2010, the percentage of importers that indicated use of the flexibilities on their ISFs declined from more than 13 percent to less than 4 percent. Over the portion of the flexible enforcement period for which CBP has data on importers’ indication of flexibilities use (September 13, 2009, through January 25, 2010), 100,252 of the 1,909,523 submitted ISFs (or 5 percent) indicated flexibilities use. 
Since the start of the enforcement period on January 26, 2010, through June 14, 2010, the percentage of ISFs for which importers claimed flexibility use remained relatively consistent at about 2 percent of ISF submissions (67,429 of the 3,647,476 filed). Additionally, from December 7, 2009, through June 14, 2010, ISFs that indicated use of both types of flexibilities remained below 0.5 percent of all ISFs each week, which corresponds to about 1 percent or less of importers claiming both types of flexibilities on their ISFs each week. While importers’ use of flexibilities has remained at about 2 percent since January 2010, the percentage of ISFs indicating use of flexibilities that constitute incorrect or unnecessary use of flexibilities has remained consistently high. The system changes CBP implemented to allow importers to indicate their intent to use the flexibilities has enabled CBP to gauge importers’ understanding of the flexibilities by analyzing whether the data provided in ISFs indicating use of the flexibilities are consistent with the flexibilities provisions in the interim final rule. For timing flexibilities, correct use of the flexibilities is indicated by an ISF missing either the consolidator element or the stuffing (packing) location element, or both. For range flexibilities, correct use of the flexibilities is indicated by multiple entries for one or more of the flexible range data elements: manufacturer, ship to party, country of origin, or commodity Harmonized Tariff Schedule number. During the period September 13, 2009, through June 14, 2010, the rate of incorrect or unnecessary use of range flexibilities has remained consistent, at around 70 percent or more of the ISFs that indicated use of the flexibilities. The rate of incorrect or unnecessary use of timing flexibilities declined from 85 percent to 63 percent during the time period September 13, 2009, through January 26, 2010, but has generally remained at around 60 percent or greater since the start of the enforcement period. Thus, while the overall use of flexibilities remains relatively low, the rate of incorrect or unnecessary use of flexibilities has remained consistently high. CBP officials stated that the overall use of flexibilities, as well as the high rates of incorrect use, will inform their consideration of whether to eliminate, modify, or maintain the existing flexibilities associated with the 10+2 rule. Due to the limited use of the flexibilities, CBP officials currently question their utility. CBP officials and trade industry representatives we spoke with stated that CBP’s standard ISF amendment process provides greater flexibility than the timing and range flexibilities provided for in the 10+2 rule. When an importer indicates use of the flexibilities on an ISF, it must submit an updated ISF to indicate that the information is final, regardless of whether the information on the ISF has changed. CBP’s standard amendment process, however, provides more latitude in that it allows the importer to initially submit information on the basis of what it reasonably believes to be true and then requires the importer to update the filing only if any of the information changes or more accurate information becomes available. These updates may be filed any time before goods enter a U.S. port, in contrast to the flexibilities, which require updates no later than 24 hours prior to goods’ arrival at a U.S. port. 
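The distinction between correct and incorrect or unnecessary use of the flexibilities can be expressed as a simple check on the data an importer actually submitted. The sketch below is illustrative only; the ISF field names are assumed, and the logic is a plain reading of the criteria described above rather than CBP's validation code.

```python
# Hypothetical check of whether an ISF that claims a flexibility actually
# uses it, based on the criteria described above (field names are assumed).
RANGE_ELEMENTS = ("manufacturer", "ship_to_party",
                  "country_of_origin", "commodity_hts_number")

def flexibility_use(isf):
    """Classify an ISF's claimed flexibility use as correct or unnecessary."""
    results = {}
    if isf.get("claims_timing_flexibility"):
        # Correct timing use: consolidator or stuffing location initially omitted.
        omitted = not isf.get("consolidator") or not isf.get("stuffing_location")
        results["timing"] = "correct" if omitted else "incorrect or unnecessary"
    if isf.get("claims_range_flexibility"):
        # Correct range use: more than one entry for at least one flexible element.
        ranged = any(len(isf.get(e, [])) > 1 for e in RANGE_ELEMENTS)
        results["range"] = "correct" if ranged else "incorrect or unnecessary"
    return results

example = {
    "claims_range_flexibility": True,
    "manufacturer": ["Acme Plastics Co."],      # only one entry supplied
    "ship_to_party": ["Example Imports LLC"],
    "country_of_origin": ["CN"],
    "commodity_hts_number": ["9503.00"],
    "consolidator": "Example Consolidation Ltd.",
    "stuffing_location": "Shenzhen, China",
}
print(flexibility_use(example))  # {'range': 'incorrect or unnecessary'}
```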
CBP officials also explained that using the flexibilities could subject importers to additional fees if they are using a third-party filer that charges for each filing because the importers would have to pay for the initial filing in addition to any updated filings. However, if the importer does not use flexibilities, the importer would only be subject to additional filing fees if shipment information changes and use of the standard amendment process is required. Some of the importers we spoke with concurred with the benefits offered by the standard amendment process as compared to use of the flexibilities. Importers we spoke with cited a variety of reasons for not using flexibilities, and one importer cited benefits for using them. Some importers echoed CBP officials’ explanation that the standard amendment process provides more flexibility and can be less costly than using the flexibilities provided for in the interim final rule. Additionally, some importers who are C-TPAT members said they are reluctant to use flexibilities because it could convey to CBP that they do not have complete awareness of their supply chains. Further, some importers cited no need for the flexibilities because they collect all of the required 10+2 data elements prior to the ISF submission deadline. One importer, however, stated that use of the range flexibilities has allowed it to develop a template through which it can submit multiple entries per flexible range element, which in turn improves the efficiency of its submission process. This importer stated it is not concerned about the expense of filing flexibility updates because that cost is expected to be offset by savings associated with automation of its filing process. Data generated by the 10+2 rule are available for use in targeting efforts, such as identification of unmanifested containers, but CBP has not yet finalized the ATS national security weighted rule set—CBP’s primary targeting criteria within ATS for identifying high-risk cargo containers—to identify risk factors present in the ISF data set. Additionally, CBP officials and trade industry representatives report that CBP’s use of the data to enforce rule compliance has not impacted trade flow. CBP targeters have access to data generated by the 10+2 rule, and tactical rules can identify risk factors based on any of the 10+2 data elements. In particular, CBP has updated the TECS rules in ATS to incorporate the additional 10+2 data elements to identify shipments that could pose a threat to national security. ATS uses the updated TECS rules to compare 10+2 data—such as the identities of the buyer, seller, or manufacturer—to certain high-risk TECS national security threats. These rules use the data to affect containers’ risk scores, which can affect whether a shipment is inspected for dangerous cargo. If ATS determines that any of the data elements are connected to high-risk TECS national security threats, it then increases the overall national security weighted rule set risk score for that shipment. For example, CBP officials said that the TECS tactical rules have identified potential risk factors for hundreds of thousands of shipments based on information from the additional 10+2 data elements. 
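Conceptually, a weighted rule set sums the weights of whichever risk factors are present in a shipment record, and the resulting score informs whether a shipment is reviewed or examined. The sketch below is a simplified illustration: the rules, weights, watchlist, and threshold are invented for the example and are not CBP's targeting criteria.

```python
# Simplified, hypothetical sketch of a weighted rule set: individual rules
# flag risk factors in shipment data, and their weights sum to a risk score.
WATCHLISTED_PARTIES = {"Example Trading FZE"}  # stand-in for TECS-style lookups

RULES = [
    # (description, weight, predicate over the shipment record)
    ("seller matches a watchlisted party", 40,
     lambda s: s["seller"] in WATCHLISTED_PARTIES),
    ("manufacturer matches a watchlisted party", 40,
     lambda s: s["manufacturer"] in WATCHLISTED_PARTIES),
    ("first-time importer of record", 15,
     lambda s: s["importer_shipment_count"] == 0),
    ("container stuffing location differs from country of origin", 10,
     lambda s: s["stuffing_country"] != s["country_of_origin"]),
]

def risk_score(shipment):
    """Sum the weights of all rules whose risk factor is present."""
    return sum(weight for _, weight, test in RULES if test(shipment))

shipment = {
    "seller": "Example Trading FZE",
    "manufacturer": "Acme Plastics Co.",
    "importer_shipment_count": 0,
    "stuffing_country": "AE",
    "country_of_origin": "CN",
}
score = risk_score(shipment)
print(f"risk score: {score}",
      "-> review for possible examination" if score >= 50 else "")
```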
Additionally, CBP officials stated that access to vessel stow plans—one of the two data elements provided by carriers—has enhanced CBP’s ability to identify potentially dangerous unmanifested containers—containers and their associated contents not listed on a ship’s manifest that pose a security risk in that no information is known about their origin or contents. CBP officials explained that they are able to use vessel stow plans to mitigate the risk posed by unmanifested containers by taking investigative actions, such as contacting carriers and trade associations to collect missing shipment data or assigning the containers for additional inspection upon reaching a U.S. port. For example, CBP officials stated that from April 22, 2010, through July 14, 2010, targeters used vessel stow plans to identify 1,050 cargo-laden unmanifested containers bound for U.S. ports. Without access to the carriers’ vessel stow plans, CBP officials said that they would not have been able to identify, investigate, and mitigate the risks posed by these potentially dangerous containers. See figure 4 for an example of a cargo-laden container vessel in transit. CBP officials said that they are in the process of updating the ATS national security weighted rule set to identify risk factors in the 10+2 data elements and intend to test them thoroughly prior to implementation, but CBP has not established time frames or milestones for when integration of a finalized weighted rule set will be completed. The finalized national security weighted rule set is intended to analyze relationships between the 10+2 data elements to identify risks in these relationships beyond those that are analyzed by TECS. According to best practices in project management, the establishment of project milestones and time frames can help ensure timely project completion. According to CBP officials, the updated weighted rule set will be tested prior to deployment by executing it in tandem with the existing weighted rule set. This test is intended to determine the ability of the updated weighted rule set to identify all potential risk factors and assign scores based on all available shipment data, including the 10+2 data elements. The test will also determine the number of shipments that would face mandatory examination because of their high risk scores. If the updated weighted rule set does not perform according to specification, or if there is an unexpected change in the number of shipments facing mandatory examination because of their risk scores, CBP plans to review and possibly amend the weighted rule set. CBP plans to continue to retest the amended weighted rule set to ensure that the system is performing according to design and that the flow of trade is not unduly impacted. Thus, until this testing is complete, CBP officials said that they will not be able to determine a date when the finalized weighted rule set will be in place. We recognize that the results of such testing could require adjustments to tasks that make it difficult to adhere exactly to established dates for completing a project. However, establishing milestones and time frames for having the finalized weighted rule set in place could help guide CBP in such testing and provide CBP with goals for completing interim steps and finishing this project, thus better positioning it for targeting high-risk cargo, thereby fulfilling the statutory purpose of the requirement to collect the additional data elements. 
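The stow plan check for unmanifested containers described above is essentially a comparison of two sets of container numbers. The sketch below uses invented identifiers and is meant only to illustrate the idea.

```python
# Hypothetical sketch: containers that appear in a vessel stow plan but are
# not associated with any manifested cargo are "unmanifested" and warrant
# follow-up. Container numbers are invented.
stow_plan_containers = {      # container numbers taken from the stow plan
    "MSKU1111111", "MSKU2222222", "TGHU3333333", "TGHU4444444",
}
manifested_containers = {     # container numbers listed on the cargo manifest
    "MSKU1111111", "MSKU2222222", "TGHU3333333",
}

unmanifested = stow_plan_containers - manifested_containers
for container in sorted(unmanifested):
    # In practice this would trigger investigative action, such as contacting
    # the carrier for missing data or flagging the container for inspection.
    print(f"{container}: on the stow plan but not on the manifest")
```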
According to CBP, the potential effectiveness of the additional 10+2 data in enhancing cargo security has been demonstrated in analyses it conducted on cargo containers arriving at the ports of Los Angeles, Long Beach, New York, and Newark in February 2006. The analyses indicated that risk scores assigned while a shipment is in transit, which are based on manifest data, may differ from the final assigned risk scores, which are based on customs entry data. For certain shipments, the difference in the risk scores assigned at these two times, in transit and at arrival, is significant enough to affect CBP’s response to these shipments. For example, twice as many containers were targeted as high risk based on entry data compared to manifest data. Therefore, earlier access to information that approximates entry data could allow CBP to (1) address risk factors before cargo is loaded on U.S.-bound ships at foreign ports, or (2) obtain more information that indicates the cargo is not high risk before the cargo arrives in the United States. The goal of the 10+2 rule is to prevent dangerous shipments from being loaded onto U.S.-bound vessels and CBP may issue “Do Not Load” orders for shipments identified as high risk based on analyses of shipment data. CBP has yet to issue any “Do Not Load” orders as a result of the 10+2 rule and does not plan to begin issuing such orders for ISF noncompliance any earlier than January 2011. According to trade industry representatives, to date, CBP’s use of the additional 10+2 data elements to target noncompliant shipments for inspection has not impacted trade flow. In particular, none of the 30 importers we interviewed stated that their trade flow has been impacted by 10+2 rule enforcement efforts such as shipment inspections or holds. According to CBP officials, individual ports have begun to use the additional 10+2 data elements to target noncompliant shipments for inspection, but CBP cannot identify the number of shipments held specifically due to 10+2 noncompliance because the data it collects do not discern between different types of holds. CBP officials added, though, that they have not received any complaints from the trade industry regarding inspections of noncompliant shipments impacting the flow of trade. According to CBP officials, individual ports make compliance enforcement decisions based on their own discretion. CBP believes that the potential impacts of noncompliance, which can include cargo inspection fees of $100 to $150 and a delay in cargo release of 1 to 3 days, are sufficient incentives for the trade industry to comply with the ISF requirements. As a result, CBP’s current enforcement strategy is to exercise the least punitive measures necessary to obtain full ISF compliance. CBP does not have any plans for initiating mandatory holds on noncompliant shipments and will continue to monitor compliance rates and its application of a measured enforcement approach for the immediate future. CBP officials stated, though, that if CBP determines that additional enforcement actions are necessary, it may consider measures, such as mandatory inspections for all noncompliant shipments. CBP officials added that they do not believe that they would take such actions before November 2010. The stated purpose of the SAFE Port Act requirement for CBP to collect additional data on U.S.-bound cargo is to enhance CBP’s ability to target high-risk cargo containers at an earlier point in the shipping process than can currently be done. 
In assessing the benefits and costs of requiring such additional data, applying best practices, such as those in OMB guidance, to the development of regulatory assessments can help determine the likelihood that the benefits of a regulation justify the costs and identify which possible actions would be most cost-effective. To this end, transparency in the assessment regarding why certain alternatives were selected for analysis and how estimates were derived is important to ensure that stakeholders can clearly see how the information in the regulatory assessment informs the regulatory action an agency takes. Furthermore, to achieve the proposed benefits of collecting additional data, CBP would need to incorporate the additional data into its targeting practices. CBP's regulatory analysis is not transparent regarding how the alternatives were selected for analysis or why the selected alternative is preferable to the others. If CBP publishes an update to its regulatory assessment, as CBP officials said it may do, further transparency could help clarify CBP's decision making in formulating the 10+2 rule. In addition, a more complete analysis—with further analysis of uncertainty for both costs and benefits, as well as certain costs to foreign entities—could help provide better information about the circumstances under which benefits justify costs. An update to the regulatory assessment with this additional information could make the assessment more transparent to the trade industry and other stakeholders who are affected by the rule. To accomplish the statutory purpose of collecting the 10+2 data, which is to enhance CBP's ability to target high-risk cargo containers, CBP is updating the ATS national security weighted rule set to identify risk factors in submitted 10+2 data elements, but it has not determined when these updates will be finalized. Establishing milestones and time frames for updating the ATS national security weighted rule set could help guide CBP staff in its efforts and provide CBP with goals for completing interim steps and finishing the project, thereby better positioning it to fulfill the purpose of the SAFE Port Act requirement and enhance its capability to identify high-risk shipments. We recommend that the Commissioner of CBP take the following two actions: If CBP updates its Regulatory Assessment and Final Regulatory Flexibility Analysis, provide greater transparency in the updated assessment regarding the information that contributed to decisions made in developing the 10+2 rule by including information such as:
1. a discussion of how the alternatives were selected for analysis, including alternatives that were considered but not included in the analysis, and what information CBP considered in addition to the regulatory assessment to conclude that the alternative requiring the Importer Security Filing, with an exemption for bulk cargo, and the Additional Carrier Requirements was preferable to the other alternatives analyzed;
2. an uncertainty analysis for the costs to importers for a day of delay and for the value of statistical life; and
3. to the extent data are available, estimates for lost profits borne by foreign entities.
To help guide CBP in updating the ATS national security weighted rule set, establish milestones and time frames for updating the ATS national security weighted rule set to use 10+2 data in its identification of shipments that could pose a threat to national security. DHS provided written comments on a draft of this report, which are reprinted in appendix I. DHS concurred with our two recommendations. Regarding our recommendation to provide greater transparency in an updated regulatory assessment, if CBP publishes such an assessment, DHS concurred. Specifically, it stated that the potential elements we cited for improving transparency will accompany the publication of a final rule for the ISF and Additional Carrier Requirements. Such actions should address the intent of our recommendation, provide greater transparency to the trade industry and other stakeholders, help clarify CBP’s decision-making process, and provide better information about the circumstances under which benefits justify costs. Regarding our recommendation to establish milestones and time frames for updating the ATS national security weighted rule set to use 10+2 data in its identification of shipments that could pose a threat to national security, DHS commented that it had already updated the weighted rule set for certain risk factors, some of which are discussed in this report, and identified requirements for modifying the weighted rule set for other risk factors, many of which it stated have been incorporated into ATS and are available for preliminary evaluation and analysis. Moreover, DHS stated that it has plans to fully integrate these updates by November 2010. Establishing a time frame for fully integrating these updates into ATS provides DHS with a goal for completing the project to fulfill the purpose of the SAFE Port Act requirement to collect additional data and can better position it to effectively target high-risk container shipments. Therefore, although DHS did not specifically discuss actions being taken to establish interim milestones for integrating these requirements, effectively integrating the updates into ATS by November 2010 would address the intent of our recommendation. CBP also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Homeland Security, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8777, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Key contributors to this report were Christopher Conrad, Assistant Director; Alana Finley, Analyst-in-Charge; Lisa Canini; and Matthew Tabbert. Charles Bausell contributed economics expertise, Stanley Kostyla assisted with design and methodology, Frances Cook provided legal support, and Katherine Davis and Lara Miklozek provided assistance in report preparation. 
Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009.
Combating Nuclear Smuggling: DHS's Program to Procure and Deploy Advanced Radiation Detection Portal Monitors Is Likely to Exceed the Department's Previous Cost Estimates. GAO-08-1108R. Washington, D.C.: September 22, 2008.
Supply Chain Security: CBP Works with International Entities to Promote Global Customs Security Standards and Initiatives, but Challenges Remain. GAO-08-538. Washington, D.C.: August 15, 2008.
Supply Chain Security: Challenges to Scanning 100 Percent of U.S.-Bound Cargo Containers. GAO-08-533T. Washington, D.C.: June 12, 2008.
Supply Chain Security: U.S. Customs and Border Protection Has Enhanced Its Partnership with Import Trade Sectors, but Challenges Remain in Verifying Security Practices. GAO-08-240. Washington, D.C.: April 25, 2008.
Supply Chain Security: Examinations of High-Risk Cargo at Foreign Seaports Have Increased, but Improved Data Collection and Performance Measures Are Needed. GAO-08-187. Washington, D.C.: January 25, 2008.
Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007.
Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007.
Combating Nuclear Smuggling: DHS's Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors' Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006.
Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006.
Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005.
Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005.
Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005.
Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005.
Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004.
Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003.
Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002.
Cargo containers present significant security concerns given the potential for using them to smuggle contraband, including weapons of mass destruction. In January 2009, U.S. Customs and Border Protection (CBP), within the Department of Homeland Security (DHS), implemented the Importer Security Filing (ISF) and Additional Carrier Requirements, collectively known as the 10+2 rule. Collection of cargo information (10 data elements for importers, such as country of origin, and 2 data elements for vessel carriers), in addition to that already collected under other CBP rules, is intended to enhance CBP's ability to identify high-risk shipments. As requested, GAO assessed, among other things, (1) the extent to which CBP conducted the 10+2 regulatory assessment in accordance with Office of Management and Budget (OMB) guidance, (2) how CBP used information it collected and assessed to inform its efforts to implement the 10+2 rule since January 2009, and (3) the extent to which CBP has used the additional 10+2 data to identify high-risk cargo. GAO analyzed relevant laws, OMB guidance, and CBP's 10+2 regulatory assessment, and interviewed CBP officials. CBP's 10+2 regulatory assessment generally adheres to OMB guidance, although greater transparency regarding the selection of alternatives analyzed and a more complete analysis could have improved CBP's assessment. CBP's regulatory assessment addresses some elements of a good regulatory assessment, as required by OMB, such as the need for the proposed action and evaluation of the benefits and costs. However, the assessment lacks transparency in that it does not explain how the four alternatives considered for the rule--variations in what and how many data elements are to be collected--were selected or how the preferred alternative was chosen. OMB guidance states that regulatory analyses should clearly explain the assumptions used in the analysis. If, as CBP officials stated, an update might be published in the future, greater transparency could help justify the scope of alternatives analyzed in the regulatory assessment and provide insight into CBP's decision making. Further, a more complete analysis of the uncertainty involved in estimating key variables used to evaluate costs and benefits could have improved CBP's regulatory assessment by providing better information about the circumstances under which benefits justify costs. CBP officials said that to the extent that data are available, this information could be added to an updated regulatory assessment to improve its completeness. CBP is using information it has collected, assessed, and shared with the trade industry to monitor and help improve compliance with and implementation of the 10+2 rule. For example, CBP collects daily information on the ISF compliance of importers' shipments at each U.S. port to monitor the status of ISF implementation, as well as data on vessels arriving in U.S. ports for which carriers did not supply information such as the position of each cargo container (stow plans). CBP data indicate that in July 2010, approximately 80 percent of shipments were ISF compliant, and CBP officials said that most carriers had submitted stow plans. CBP publishes answers to frequently asked questions on its Web site and has conducted outreach sessions with the trade industry to discuss errors in ISF submissions and help improve compliance. 
The 10+2 rule data elements are available for identifying high-risk cargo, but CBP has not yet finalized its national security targeting criteria to include these additional data elements to support high-risk targeting. CBP has assessed the submitted 10+2 data elements for risk factors, and according to CBP officials, access to information on stow plans has enabled CBP to identify more than 1,000 unmanifested containers--containers that are inherently high risk because their contents are not listed on a ship's manifest. CBP has conducted a preliminary analysis that indicates that the collection of the additional 10+2 data elements could help determine risk earlier in the supply chain, but CBP has not yet finalized its national security targeting criteria for identifying high-risk cargo containers or established project time frames and milestones--best practices in project management--for doing so. Such efforts could help provide CBP with goals for finishing this project, thus better positioning it to improve its targeting of high-risk cargo. GAO recommends that CBP should, if it updates its regulatory assessment, include information to improve transparency and completeness, and set time frames and milestones for updating its national security targeting criteria. DHS concurred with these recommendations.
The 1985 through 1995 period saw an increase in both the number of college students and the proportion of the college-aged population in colleges, universities, training schools, and other postsecondary institutions. In 1995, more than 34 percent of all 18- to 24-year-old U.S. residents were attending postsecondary schools, compared with slightly less than 28 percent in 1985. Many who attend also plan to stay longer: Two-thirds of college freshmen now intend to go beyond a baccalaureate degree, compared with about half in 1980. In part, this interest in postsecondary education likely reflects students’ recognition that such education is associated with higher incomes later in life. Bureau of the Census statistics indicate that, on average, households headed by persons with bachelor’s degrees have average incomes nearly 70 percent higher than households headed by persons with no more than high school diplomas. Households with a member that has a professional degree have incomes that average about three times those of households in which members’ highest certificate is a high school diploma. As the number of students has increased, so has the size of the government’s student loan programs. By the end of fiscal year 1996, the estimated outstanding amount of loans provided by the Department of Education’s two largest loan programs, the principal sources of loans for postsecondary education, had reached $112 billion, up from $91 billion a year earlier and from $65 billion in 1990, in constant 1995-96 dollars. The Higher Education Amendments of 1992 increased the maximum amount that students could borrow. For example, the limit for graduate and professional students rose from $74,750 to $138,500 (in current dollars, including both graduate and undergraduate loans). Borrowing and working both play significant roles in how students pay for their education. Figure 1 shows how an “average” full-time student met the cost of his or her education at various types of postsecondary institutions in school year 1995-96. Together, borrowing and working constituted more than half of the amount of funds students needed to pay their cost of attendance at all types of schools, except private 4-year schools. While this “average” view is instructive as a way to see the general role of student borrowing and working patterns, it does not show the wide range of methods students use to finance college. Some students do not borrow or work at all, while others earn more than enough to cover the cost of attending college. To provide a more complete picture, our report focuses on those students who borrow and those who work, showing the annual and cumulative amounts of their borrowing and the number of hours worked per week while they were enrolled. The proportion of students who borrowed to finance the cost of postsecondary education increased between school years 1992-93 and 1995-96, and the amounts they borrowed increased, after taking inflation into account. In general, this was true for both undergraduates and graduate and professional students. An increasing percentage of undergraduates in all types of programs turned to borrowing to finance part of their education. To provide as complete a picture as possible of how students used borrowing during their entire period of enrollment, we focused our analysis on undergraduates who had completed their 2-year, 4-year, or other programs in 1992-93 and 1995-96. In 1992-93, 41 percent of undergraduates who completed their programs had borrowed in 1 or more years. 
By 1995-96, this number had risen to 52 percent. The percentage varied, however, by type of degree or certificate, with the greatest increase in the group receiving bachelor’s degrees (see table 1). The average amount borrowed by undergraduates completing their program (excluding those who had not borrowed) rose from about $7,800 to about $9,700 over the 1992-93 to 1995-96 period, after adjusting for inflation. The amounts borrowed by those receiving bachelor’s degrees in 1995-96 were the highest. Among bachelor’s degree recipients, the portion of students who had borrowed $20,000 or more for the 1992-93 through 1995-96 time period rose from about 9 percent to about 19 percent of graduating seniors who had borrowed; see table 2. (See tables II.2, II.3, and II.4 for supporting data, including confidence intervals (degree of precision) for the estimates.) The most substantial increases in the number of graduating students who borrowed occurred at public schools. At 4-year public schools, the percentage of graduating seniors who borrowed in 1 or more years rose from 42 percent in 1992-93 to 60 percent in 1995-96 (see table 3.) This increase eliminated the earlier difference between public and private 4-year schools in the percentage of students borrowing in 1 or more years—public school students “caught up” to private school students in terms of the percentage of the group that borrowed. Students at private schools, however, still borrowed larger amounts during both school years. Students graduating from public schools offering less than 4-year degrees also borrowed in substantially higher numbers, although the average amount borrowed changed little after taking inflation into account. In the aggregate, borrowing by graduate and professional students also increased. In 1992-93, about 55 percent of graduate and professional students who completed their degrees had borrowed in 1 or more years, and those who had borrowed had a cumulative debt (for graduate, professional, and undergraduate education) averaging $16,990. By 1995-96, about 62 percent of this group borrowed, and their cumulative debt averaged $24,340. Students in professional programs were the most likely to borrow and had the highest levels of debt. For 1995-96, students completing professional programs had an average debt of $59,909, and the percentage of students who borrowed more than $50,000 had increased from 34 percent to 60 percent. (See table 4.) Changes in students’ employment have been less pronounced than changes in borrowing. Compared with 1992-93, the percentage of full-time undergraduate students who worked while attending school rose slightly, while the percentage of graduate and professional students who worked generally declined. Among those who worked, the average number of hours remained relatively steady. Most full-time undergraduate students worked during the school year in both 1992-93 and 1995-96. The percentage of full-time students who worked rose in all three program categories—certificate or award, associate degree, and bachelor’s degree. Overall, during 1995-96 more than two-thirds of full-time undergraduates worked while enrolled. On average, undergraduates worked 23 hours per week; however, this varied considerably by program, with students in associate and certificate or award programs working the most. The average number of hours worked per week did not change appreciably from 1992-93, although it rose somewhat among students completing associate degree programs. (See table 5.) 
At 4-year and proprietary schools, the percentage of full-time, full-year undergraduates who worked during the 1995-96 school year was substantially higher than the percentage who worked in 1992-93. (See table 6.) Average hours worked per week did not change significantly. (See tables II.6, II.7, and II.8.) Students in master's and doctoral programs in school year 1995-96 were more likely to work, and to work more hours per week, than were students in professional programs. Working students in professional programs averaged about 20 hours of work a week, while those in master's and doctoral programs averaged about 25 to 30 hours per week. Many of these students held jobs in their field of study, such as teaching or research assistance. About 80 percent of full-time doctoral students who worked while enrolled said they held positions directly related to their studies, compared with about 63 percent of students in master's programs and about two-thirds of students in professional programs. However, even though more students in master's and professional programs worked in off-campus jobs than did doctoral students, most of them still regarded their jobs as closely related to their field of study. (See table 7.) To gain a better understanding of student work and borrowing patterns during school year 1995-96, we analyzed amounts borrowed and hours worked by several factors, including type of school, cost of attendance, year in school, dependency status, gender, family income, race/ethnicity, and expected family contribution. We focused this analysis on undergraduate students because the data for graduate and professional students did not produce statistically meaningful results when divided into many of the categories and subcategories we analyzed. (See app. III for further details on our analyses.) To help identify the relationships between the various factors selected for analysis, we conducted a series of regression analyses. Regression analysis is a statistical technique that can analyze many factors at the same time and estimate their relationship to a given outcome. In this case, our analyses were directed at determining what factors, if any, help predict the amount of money that students borrowed. Our results indicated that none of the factors we examined were strong predictors of the amount of student borrowing. Not surprisingly, the most influential factor that emerged from our analyses was the cost of the school attended. However, this factor accounted for only about 11 percent of the difference in the amounts of borrowing that occurred, after controlling for other factors. Several factors that helped account for smaller portions of the variation in amounts borrowed were the student's class level (freshman, sophomore, junior, or senior), the amount of grant aid received, and whether the student was independent. Other factors included in the analysis (type of school, race/ethnicity, adjusted gross family income percentile, expected family contribution, and hours worked per week while enrolled) accounted for little, if any, variation. Together, all of the factors we examined accounted for about 31 percent of the variation in the amounts borrowed. The relationship we saw in the regression analysis between the cost of the school attended and the amounts borrowed is also apparent in comparing graduating seniors' average cost of attendance for 1995-96 and the average cumulative amount of their borrowing.
As shown in figure 2, seniors whose annual school costs were $20,000 or more borrowed an average cumulative amount of about $18,000. In contrast, the comparable amount borrowed was about $11,000 for those whose annual school costs were between $5,000 and $9,999. Appendix IV has additional data on variations in the average cumulative amounts borrowed by undergraduates and variations in the proportion who borrowed in 1 or more years as undergraduates. For example, appendix IV shows variations in these factors by student year in school, by race/ethnicity, and by parental income. As with borrowing, we conducted a regression analysis to determine which factors, if any, would be strong predictors of how much students will work. We used the same list of factors as we did for borrowing, but in this case, different factors emerged as important. Dependency status and type of school accounted for the largest, though still small, portions of the variation in hours worked (3.1 percent and 2.5 percent, respectively). Of students who worked, those who worked more hours tended to be the independent students. On average, full-time, full-year independent students who were employed while enrolled worked about 28 hours per week, compared with an average of about 21 hours for their dependent counterparts. (See table II.10 for further details.) Other factors included in our regression analysis each accounted for less than 1 percent of the variation in hours worked. (For additional data on variation in work patterns, see apps. III and IV.) In contrast with the substantial amount of information about students' own borrowing experiences, little information is available about the amounts that parents borrow to pay for their children's postsecondary education. In general, studies that provide data on parents' education debt were dated or limited in scope, and they often failed to differentiate between postsecondary education debt and other types of education debt. We found three studies that come closest to describing the debt parents incur for their children's postsecondary education. Of these, the Department of Education's work contained the most useful information. The best available data are in the Department of Education's NPSAS, which we used as the basis for information on student borrowing and work patterns. As part of this survey, which is conducted periodically, the Department collected some information through telephone interviews with samples of parents. However, changes to the questions included in the 1995-96 NPSAS did not provide data comparable with earlier survey results. The most recent NPSAS (1995-96) included parents' responses related only to certain groups of undergraduates, such as dependent students who did not receive financial aid or those whose schools' files did not include parents' adjusted gross income. Since such a sample of parents would not be representative of parents of all undergraduates, estimates based on responses from that year's survey are not included here. The 1992-93 NPSAS provided a more wide-ranging sampling of parents selected to represent a group of graduating seniors. Parents of between 8 and 11 percent of seniors under 24 years of age who graduated in 1992-93 reported borrowing to help finance their child's education during 1992-93. The average amount parents borrowed for these seniors for 1992-93 was between $10,734 and $14,553.
Sources of borrowing included home equity loans, home equity lines of credit, signature loans, state- or school-sponsored parent loans, loans against life insurance policies and retirement funds, commercial loans, and federal PLUS loans. Parents have borrowed a rapidly increasing amount of loan funds through the Department of Education's PLUS program. Parents of about 5 percent of dependent undergraduate students participated in this program during 1995-96, about the same portion as in 1992-93. Among dependent students who graduated as seniors in 1995-96, about 10 percent had parents who had used the program during 1 or more years of their child's postsecondary schooling. The average cumulative amount they borrowed was about $9,748. NPSAS results indicate that the average was about $9,022 for parents of students at public 4-year schools and $10,673 for those at private 4-year schools. Amounts of PLUS borrowing have also risen in recent years, reflecting the influence of higher loan limits. According to the Department, the average amount of these loans increased by about 55 percent (from $3,588 to $5,556 in constant 1995-96 dollars) over the 1992-93 to 1995-96 period. In the Higher Education Amendments of 1992, limits on the amount of PLUS loans were lifted. Currently, eligible parents may borrow, regardless of financial need, up to their student's cost of attendance, less the amount of other financial aid received. Other survey data suggest that education has been an important use of funds obtained from home equity loans. Excluding first mortgages, U.S. home equity debt totaled about $255 billion in 1993, $110 billion of which was in home equity lines of credit and $145 billion in traditional home equity loans. According to a school year 1993-94 survey by the University of Michigan, among borrowers using home equity lines of credit, about 21 percent indicated that some or all of these loan funds were used for education, up from 18 percent in 1988. Among borrowers using traditional equity loans, about 7 percent indicated that some or all of the funds were used for education, up from 5 percent in 1988. The survey did not indicate what portion of these funds went for children's postsecondary education and how much may have been used for other educational uses, such as private elementary or secondary schools. The Federal Reserve Board's surveys of U.S. households indicate that education debt was about 1.9 percent of U.S. household debt in 1989 and about 2.5 percent in 1992 and 1995. However, the surveys are not designed to capture parents' debt for their children's postsecondary education. They do not make a distinction between debt for postsecondary education and debt for elementary and secondary education, nor do they distinguish between debt owed by parents for a child's education and debt owed by parents for their own education. The Department of Education reviewed a draft of this report and had no formal comments, although it provided several technical suggestions that we incorporated as appropriate. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. Please call me at (202) 512-7014 if you or your staff have any questions regarding this report. Major contributors included Joseph J. Eglin, Jr., Assistant Director; Charles M. Novak; Benjamin P. Pfeiffer; and Dianne L. Whitman-Miner.
To analyze working and borrowing patterns among postsecondary students, we reviewed literature and data from the Department of Education and other sources, such as various professional associations. The data we analyzed included the Department of Education’s periodic National Postsecondary Student Aid Study (NPSAS), the Federal Reserve Board’s Survey of Consumer Finances, Claritas Inc.’s (a private research firm) survey on use of credit cards, and the University of Michigan’s National Survey of Home Equity Loans. In connection with this effort, we also interviewed Department of Education officials and staff of professional associations and the Federal Reserve Board. The Department’s NPSAS addresses how students and their families pay for postsecondary education and involves nationally representative samples of all students in postsecondary institutions. In 1995-96, for example, the Department selected a sample of over 950 institutions and about 50,000 students. The researchers gathered data about students from schools’ institutional records and the Department’s records (including financial aid applications and the National Student Loan Data System). They also gathered information by telephoning a subsample of about 27,000 undergraduates and about 4,000 graduate and professional students. We focused our analysis on the average amounts of borrowing and average cumulative debt reported in the NPSAS for school years 1992-93 and 1995-96. Unless otherwise indicated, the term “debt” in this report refers to the cumulative total of the principal amounts borrowed by students for undergraduate education (borrowing for all the costs of attendance, including room and board). The data on the amount of students’ cumulative debt were self-reported and, according to the Department’s NPSAS project officer, the extent to which it includes credit card debt is unknown. The portion of college students with credit cards rose from about one-half to about two-thirds from 1990 to 1996, according to a study by Claritas Inc. The estimated aggregate average balance grew from about $900 in 1990 to about $2,250 in the third quarter of 1997. (These amounts have not been adjusted for inflation, and Claritas Inc. did not provide confidence intervals for these numbers.) Average annual amounts of borrowing came from NPSAS analysis of school records for over 50,000 undergraduate students and Department of Education records for students with federal student loans. Data on cumulative debt came from telephone surveys of about 27,000 respondents to NPSAS telephone surveys. About 1,500 of these were graduating seniors. The 1989-90 NPSAS survey did not identify students who completed their degree program in that year, so we limited our analysis of those data to a comparison of 1992-93 and 1995-96 survey results. We did use the 1989-90 survey as a point of comparison for the overall portion of undergraduates who worked while enrolled. Similarly, we focused our analysis of undergraduate students’ work patterns on students included in NPSAS’ 1992-93 and 1995-96 surveys who enrolled as undergraduates for their first term during the May 1 through April 30 time period, and attended full-time for a full year (9 months). To assess the number of hours worked by undergraduate students while enrolled, we used NPSAS for 1992-93 and 1995-96. These data came from a computer-aided telephone interview. 
To assess parents' borrowing for their child's postsecondary education, we used parent responses to NPSAS' 1992-93 survey, Federal Reserve Board data from its Survey of Consumer Finances for 1995, and the University of Michigan's National Survey of Home Equity Loans. Our analysis of graduate and professional students included those in NPSAS who were enrolled in a postbaccalaureate program that began between May 1 and April 30 in the 1992-93 or 1995-96 NPSAS years. Data on hours worked while enrolled came from NPSAS telephone interviews with about 4,000 students. We limited our analysis of hours worked and earnings to those who were enrolled full time for a full year (9 months). Data on cumulative borrowing came from NPSAS telephone interviews of about 2,800 graduate students and about 1,200 professional students. Because we were unable to identify students who were in the last year of their graduate or professional degree program in school year 1989-90 or who completed their degree during that year, we limited our analysis of graduate and professional degree students' cumulative debt to 1992-93 and 1995-96. Analysts use various statistical techniques to help evaluate the relative strength of relationships that can be found in sets of data. To calculate confidence intervals for survey results, we used standard errors provided by the Department and a 95-percent confidence level. Similarly, we tested for the statistical significance of differences between groups using t-tests and a p = 0.05 criterion. To further assess statistical relationships between variables discussed in this report, we performed two linear regression analyses with the following dependent variables: (1) the amount undergraduates borrowed for 1995-96 and (2) the average hours full-time, full-year undergraduates worked per week while enrolled during 1995-96. To indicate the extent to which borrowing and debt have changed at a rate faster or slower than changes in consumer prices, we analyzed levels of cumulative borrowing in constant 1995-96 dollars. To calculate constant 1995-96 dollars, we used the Bureau of Labor Statistics' Consumer Price Index for all urban consumers. We conducted our work from April to December 1997 in accordance with generally accepted government auditing standards. Because the Department uses several methods to check and review NPSAS data and these data are widely relied upon in the education community, we did not validate the reliability of the data derived from the sources indicated. The tables in this appendix contain additional details regarding the information presented in the letter portion of this report. The tables present category-by-category estimates for various aspects of student debt and work, along with confidence intervals for each. The estimated averages shown are based on analysis of the results from a sample of students. The confidence intervals are the ranges in which the averages are likely to fall for the entire population of postsecondary students within the category indicated. The table notes indicate whether differences in the estimated averages for various sample groups are statistically significant. We identified differences as statistically significant when our statistical tests showed less than a 5-percent chance that the differences between groups occurred purely by chance.
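To illustrate the interval and significance calculations just described, the sketch below shows the arithmetic in simplified form. The estimates, standard errors, and price-index values are made up for the example; the actual computations relied on NPSAS standard errors that reflect the survey's complex sample design and on published CPI-U figures.

```python
from math import sqrt

Z_95 = 1.96  # critical value for a 95-percent confidence level

def confidence_interval(estimate, standard_error):
    """95-percent confidence interval around a survey estimate."""
    margin = Z_95 * standard_error
    return estimate - margin, estimate + margin

def significantly_different(est1, se1, est2, se2):
    """Rough two-group test at the p = 0.05 level for independent estimates."""
    z = abs(est1 - est2) / sqrt(se1**2 + se2**2)
    return z > Z_95

def to_constant_dollars(amount, cpi_then, cpi_base):
    """Express an amount in base-year dollars using a consumer price index."""
    return amount * cpi_base / cpi_then

# Hypothetical values only: average-debt estimates for two school years.
print(confidence_interval(9_700, 450))                  # (8818.0, 10582.0)
print(significantly_different(7_800, 400, 9_700, 450))  # True
print(round(to_constant_dollars(7_800, 144.0, 156.0)))  # placeholder index values, not BLS figures
```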
[Appendix II tables, including table II.5, Students in Graduate and Professional Programs Who Borrowed, Amount Borrowed, and Percentage With $50,000 or More Debt, School Years 1992-93 and 1995-96, are not reproduced here.]

Although our work focused primarily on the extent to which borrowing and working by undergraduates varied by each of several factors (type of school and year in school, for example), we sought more information about the extent to which these factors were predictive. To do this, we performed a series of regression analyses. Each analysis indicates what portion of variance in the working or borrowing variable examined was accounted for by each factor after taking the other factors into account. In tables III.1 and III.2, the portion of variance accounted for by each factor is the change in the portion of variance accounted for (R², expressed as a percentage) when that variable is added to a model that already includes (controls for) all the other variables listed. "Total accounted for" is the percentage of variance accounted for when all listed variables are included. This is the coefficient of determination (R²), a statistic that indicates how well a statistical model fits the data. If there is no linear relationship between the dependent and independent variables, R² equals 0; if the model accounts for all of the variation in the dependent variable, R² equals 1 (100 percent). The regression coefficients (B) shown in each table indicate the extent to which a change in each independent variable is associated with a change in the dependent variable. For example, in table III.1, the regression coefficient for graduating seniors is $1,632.75. This indicates that, after taking into account the relationships between all the variables listed, graduating seniors borrowed an average of $1,632.75 more than freshmen in their first year of postsecondary education (the reference category). The standardized regression coefficients (beta) shown in each table are statistics that are standardized to allow comparison when the independent variables are measured in different units. They help analysts compare the extent to which variables help predict variation in the dependent variable, such as the amount borrowed. The unit of measure for beta weights is a standard deviation in the dependent variable. (Standard deviations are measures of the extent to which, for example, the amounts students borrowed typically differed from the average amount borrowed.) A beta weight is an estimate of the number of standard deviations more a student is expected to borrow for a one standard deviation increase in an independent variable (see table III.1). The significance test (probability based on the t-statistic) in each table indicates, for the addition of each variable to the model, the probability that the statistical relationship between the independent variable and the variation in the dependent variable not accounted for by other variables is due to random factors. In the analysis of the statistical relationship between each dependent variable and each categorical variable, such as year in school, we identified a reference category. The tables provide regression statistics that indicate the extent to which nonreference groups compare statistically with the reference group. In both tables, the reference groups are white non-Hispanic, dependent, men, first-time beginning freshmen, and attending public 4-year schools.
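The statistics described above (R², the incremental portion of variance accounted for by one factor after controlling for the others, the regression coefficient B, and the standardized beta weight) can be illustrated with a small simulation. The sketch below uses made-up data and variable names; it is not the NPSAS data or the model reported in tables III.1 and III.2.

```python
import numpy as np

# Simulated data standing in for cost of attendance, grant aid, and amount borrowed.
rng = np.random.default_rng(0)
n = 500
cost = rng.uniform(2_000, 25_000, n)
grants = rng.uniform(0, 8_000, n)
borrowed = 0.4 * cost - 0.3 * grants + rng.normal(0, 2_000, n)

def r_squared(X, y):
    """Return (R^2, coefficients) for an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid.var() / y.var(), coef

# Incremental share of variance accounted for by cost after controlling for grants.
r2_full, coef = r_squared(np.column_stack([cost, grants]), borrowed)
r2_without_cost, _ = r_squared(grants.reshape(-1, 1), borrowed)
print(f"R^2 with all predictors: {r2_full:.3f}")
print(f"Portion of variance accounted for by cost, controlling for grants: {r2_full - r2_without_cost:.3f}")

# Standardized coefficient (beta weight): B rescaled by sd(x)/sd(y) so that
# predictors measured in different units can be compared.
beta_cost = coef[1] * cost.std() / borrowed.std()
print(f"B for cost: {coef[1]:.3f}   beta for cost: {beta_cost:.3f}")
```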
[Tables III.1 and III.2, which report the regression coefficients (B), standardized coefficients (beta), and the portion of variance accounted for by each factor, are not fully reproduced here. The total portion of variance accounted for was 0.3068 for the amounts undergraduates borrowed (11,171 degrees of freedom) and 0.1681 for hours worked per week while enrolled (8,682 degrees of freedom); these figures reflect an adjustment for design effect.]

We were also asked to analyze borrowing and work patterns in relation to four other factors: students' class level, students' dependency status, parental income, and students' race/ethnicity. This appendix contains our findings with respect to these factors and concludes with tables that provide additional information to supplement that shown in tables IV.1 through IV.5 and figure IV.1. Not surprisingly, students who had attended school for several years were more likely to have borrowed—and in greater amounts—than their counterparts who had not been in school as long. A greater portion of students borrowed at all undergraduate levels, and the amounts they borrowed increased to a statistically significant extent for everyone but freshmen (see table IV.1). Students who were classified by the Department of Education's financial aid needs analysis process as dependent on their parents were less apt to borrow than those who were classified as independent, but when they borrowed, they tended to borrow larger amounts. Among seniors graduating in 1995-96, 51 percent of those who were dependent on their parents borrowed in 1 or more years, compared with 71 percent of independent students. On average, dependent students borrowed $13,754, compared with $12,842 for independent students. Comparing 1995-96 graduating dependent seniors with their counterparts in 1992-93, borrowing was up across all income levels. As figure IV.1 shows, borrowing tended to be most common among dependent students from families whose annual income was less than $45,000. However, the portion of dependent students who borrowed increased at all family income levels, and at the highest level ($100,000 and above), it nearly doubled, from 16.3 percent to 32.6 percent. The increase in amounts borrowed was relatively uniform among all income groups except the lowest and highest.

[Figure IV.1: A larger portion of dependent students borrowed at most levels of parental income, and amounts increased across nearly every income category. The figure's panels show the percentage of dependent students who borrowed in each income category and the amounts borrowed, in thousands of dollars.]

Analysis of cumulative borrowing by race and ethnicity indicated that all four groups analyzed (white, not Hispanic; black, not Hispanic; Hispanic; and Asian/Pacific Islander) showed similar increases in the portion of students borrowing.
Average cumulative amounts borrowed ranged from about $11,910 for Hispanics to about $16,531 for students with Asian and Pacific Islander backgrounds. Greater portions of black and Hispanic groups borrowed than whites. (See table IV.2.) (Information was collected in 1995-96 that included American Indians/Alaskan Natives, but the data were insufficient for analysis.) Increases in the percentage of students working were reflected across all undergraduate years (see table IV.3). As in 1992-93, juniors and seniors enrolled full time in 1995-96 were more apt to work while enrolled, but on average worked slightly fewer hours than freshmen or sophomores. A higher percentage of both dependent and independent students worked in 1995-96 compared with 1992-93. For 1995-96, the percentage was higher for dependent students, but among those students who worked, independent students worked about 6 hours more per week. Among dependent undergraduates, students in all income groups were more apt to work while enrolled in 1995-96 than in 1992-93 (see table IV.4). Those whose parents were in middle-income groups were more likely to work while enrolled. The average number of hours worked changed little and varied little by income group. The percentage of students who worked in 1995-96 was higher than the percentage for 1992-93 across racial and ethnic groups as well (see table IV.5). White, black, and Hispanic students had the highest percentages of students who worked, and black and Hispanic students had the highest average hours worked per week. The following tables provide data supporting the preceding figure and tables, along with additional information, including confidence intervals for each estimate.

[The supporting appendix tables, which report amounts borrowed and hours worked by parental income (in constant 1995-96 dollars), race/ethnicity, and other factors, along with confidence intervals for each estimate, are not reproduced here.]
Pursuant to a congressional request, GAO provided information on: (1) the changes that have occurred in recent years in the percentage of undergraduate and graduate/professional students who borrow and in the cumulative amount of their borrowing; (2) the changes that have occurred in the percentage of undergraduate and graduate/professional students who work and the number of hours they work; (3) how undergraduate borrowing and work patterns differ by type of school, year in school, dependency status, family income, and race/ ethnicity; and (4) information concerning the amounts of education debt parents incur. GAO based its review in large part on an analysis of data collected by the Department of Education as part of the National Postsecondary Student Aid Study. GAO noted that: (1) over the past several years, students have turned increasingly to borrowing to cope with rising education costs; (2) at the undergraduate level, the percentage of postsecondary students who had borrowed by the time they completed their programs (received a bachelor's degree, associate degree, or award or certificate) increased from 41 percent in 1992-93 to 52 percent in 1995-96; (3) the average amount of debt per student increased from about $7,800 to about $9,700 in constant 1995-96 dollars; (4) for graduating seniors (recipients of bachelor's degrees) and who had borrowed, the average rose from about $10,100 to about $13,300; (5) the portion of these graduates with $20,000 or more of student debt grew from 9 percent to 19 percent during the period; (6) students attending 4-year public institutions showed the largest increase in the number of borrowers; (7) sixty percent of seniors graduating from these schools in 1995-96 borrowed at some point in their program, up from 42 percent in 1992-93 and about even with the percentage of borrowers at private 4-year schools; (8) students at 2-year public institutions borrowed least often and in lesser amounts; (9) at the graduate and professional levels, the percentage of borrowers and the level of debt generally increased; (10) higher borrowing levels were especially pronounced at professional schools, where average debt among borrowers completing their programs climbed from about $45,000 in 1992-93 to nearly $60,000 in 1995-96; (11) more full-time undergraduates worked while attending school in 1995-96 than in 1992-93; (12) more than two-thirds of full-time undergraduate students held jobs during 1995-96, working an average of 23 hours a week while enrolled; (13) at graduate and professional schools, the percentage of full-time students who worked changed little over the same period; (14) about two-thirds of master's and doctoral students worked, usually in part-time jobs directly related to their field of study; (15) at professional schools, less than half worked while enrolled; (16) some variations in borrowing and work patterns can also be seen on the basis of the cost of attendance, dependency status, family income, and gender; (17) however, most characteristics are not very strong predictors of how much undergraduates were likely to borrow or work; (18) little information is available about amounts of debt parents accumulate in order to pay for their children's postsecondary education; and (19) in general, household debt for education remains a small share of household debt.
Through fiscal year 1998, about $172 million has been allocated to the ACTD program and 48 projects have been approved. DOD’s budget request for fiscal year 1999 for the ACTD program is $116.4 million. An additional 10 to 15 projects are expected to be funded in fiscal year 1999. Under the current ACTD program, DOD builds prototypes to assess the military utility of mature technologies, which are used to reduce or avoid the time and effort usually devoted to technology development. Demonstrations that assess a prototype’s military utility are structured to be completed within 2 to 4 years and require the participation of field users (war fighters). ACTD projects are not acquisition programs. The ACTD program seeks to provide the war fighter with the opportunity to assess a prototype’s capability in realistic operational scenarios. From this demonstration, the war fighter can refine operational requirements, develop an initial concept of operation, and make a determination of the military utility of the technology before DOD decides whether the technology should enter into the normal acquisition process. Not all projects will be selected for transition into the normal acquisition process. The user can conclude that the technology (1) does not have sufficient military utility and that acquisition is not warranted or (2) has sufficient utility but that additional procurement is not necessary. Of the 11 ACTD projects completed as of August 1998, 2 were found to have insufficient utility to proceed further, 8 were found to have military utility but no further procurement was found to be needed at the time, and 1 was found to have utility and has transitioned to the normal acquisition process. ACTD funding is to be used to procure enough prototypes to conduct the basic demonstration of military utility. At the conclusion of the basic demonstration, ACTD projects are expected to provide a residual operational capability for the war fighter. Under the current practice, ACTD funding is also to be available to support continued use of ACTD prototypes that have military utility for a 2-year, post-demonstration period. The 2 years of funding is to support continued use by an operational unit and provide the time needed to separately budget for the acquisition of additional systems. Further, if the ACTD prototypes—such as missiles—will be consumed during the basic demonstration, additional prototypes are to be procured. As stated in the ACTD guidance, a key to successfully exploiting the results of the demonstration is to enter the appropriate phase of acquisition without loss of momentum. ACTDs are intended to shorten the acquisition cycle by reducing or eliminating technology development and maturation activities during the normal acquisition process. Further, DOD can concentrate more on technology integration and demonstration activities. Time and effort usually devoted to technology development can be significantly reduced or avoided and the subsequent acquisition process reduced accordingly, if the project is deemed to have sufficient military utility. ACTD candidates are nominated from a variety of sources within the defense community, including the Commanders in Chief, the Joint Chiefs of Staff, the Office of the Secretary of Defense agencies, the services, and the research and development laboratories. The candidates are then reviewed and assessed by staff from the Office of the Deputy Under Secretary of Defense (Advanced Technology). 
After this initial screening, the remaining candidates are further assessed by a panel of technology experts. The best candidates are then submitted to the Joint Requirements Oversight Council, which assesses their priority. The final determination of the candidates to be funded is made within the Office of the Deputy Under Secretary of Defense (Advanced Technology), with final approval by the Under Secretary of Defense (Acquisition and Technology). By limiting consideration to prototypes that feature mature technology, the ACTD program avoids the time and risks associated with technology development, concentrating instead on technology integration and demonstration activities. The information gained through the demonstration of the mature technology could provide a good jump start to the normal acquisition process, if the demonstration shows that the technology has sufficient military value. Time and effort usually devoted to technology development could be reduced or avoided and the acquisition process shortened accordingly. Program officials stated that they have a mechanism in place to ensure that only those projects using mature technology are allowed to become ACTDs. These officials explained that an ACTD candidate's technology is assessed by high-ranking representatives from the services and the DOD science and technology community before candidates are selected. Program personnel stated that determining technology maturity is important before a candidate is selected because ACTD program funding is not intended to be used for technology development. According to program guidance, the ACTD funding is to be used for (1) costs incurred when existing technology programs are reoriented to support ACTD, (2) costs to procure additional assets for the basic ACTD demonstration, and (3) costs for technical support for 2 years of field operations following the basic ACTD demonstration. We were told that no ACTD money was to be used for technology development activities. However, the project selection process does not ensure that only mature technologies enter the ACTD program. We found examples where immature technologies were selected and technology development was taking place after the approval and start of the ACTD program. The current operations manager of the Combat Identification project, which began in fiscal year 1996, told us that one of his major concerns was that some of the ACTD funding was being used for technology development rather than exclusively for designing and implementing the assessment. In fact, during the ACTD project, technical or laboratory testing was still necessary to evaluate the acceptability of many of the 12 technologies included in the initial project. Eventually, 6 of the 12 technologies had to be terminated. According to the demonstration manager, 2 of the 6 technologies were terminated because they were immature, which is one of the reasons the project is currently behind schedule. Another example of the inclusion of immature technology occurred in the Outrider Unmanned Aerial Vehicle project. According to the management plan for the project, one of the individual technologies to be incorporated into the vehicle was a heavy fuel engine. According to a program official, this technology was later deemed too immature, and an alternate technology had to be used. However, the attempt to use this immature technology had already caused schedule slippage and cost overruns in the ACTD project.
To complete the basic demonstration within the prescribed 2 to 4 year period, ACTDs typically use early prototypes. If the demonstrated technology is deemed to have sufficient military utility, many ACTD projects will still need to enter the normal acquisition process to complete product and concept development and testing to determine, for example, whether the system is producible and can meet the user’s suitability needs. These attributes of a system go beyond the ACTD’s demonstration of military utility to address whether the item can meet the full military requirement. Commercial items that do not require any further development could proceed directly to production. However, other non-software related ACTDs should enter the engineering and manufacturing development phase to proceed with product and concept development and testing. According to ACTD guidance, if further significant development is needed, a system might enter the development portion of the engineering and manufacturing development phase. However, the guidance states that, if the capability is adequate, the ACTD can directly enter production. The guidance does not specifically define what is considered an “adequate capability” to allow an ACTD system to enter low-rate production. In 1994, we reported on numerous instances of weapon systems that began production prematurely and later experienced significant operational effectiveness or suitability problems. In our best practices report, we reported that typically DOD programs allowed much more technology development to continue into the product development phase than is the case in commercial practices. Turbulence in program outcomes—in the form of production problems and associated cost and schedule increases—was the predictable consequence of DOD’s actions. In contrast, commercial firms gained more knowledge about a product’s technology, performance, and producibility much earlier in the product development process. Commercial firms consider not having this type of knowledge early in the acquisition process an unacceptable risk. In responding to that report, the Secretary of Defense stated that DOD is vigorously pursuing the adoption of such business practices. Specifically, he stated that DOD has taken steps to separate technology development from product development through the use of ACTDs. The ACTD guidance and DOD’s current practice do not appear to reflect this emphasis. In the case of the Predator ACTD, the one ACTD that has proceeded into production, DOD decided to enter the technology into production before proceeding with product and concept development and testing, thereby accepting programmatic risks that could offset the schedule and other benefits gained through the ACTD process. In the early operational assessment of the Predator’s ACTD demonstration, the Director, Operational Test and Evaluation, did not make a determination of the system’s potential operational effectiveness or suitability. However, the system was found to be deficient in several areas, including mission reliability, documentation, and pilot training. The assessment also noted that the ACTD demonstration was not designed to evaluate several other areas such as system survivability, supportability, target location accuracy, training, and staffing requirements. The basic ACTD demonstration may have clarified the Predator’s military utility but it did not demonstrate its system requirements or its suitability. 
Thus, instead of using the knowledge acquired during the demonstration to complete the Predator’s development through the product and concept development and testing stages of acquisition, DOD allowed it to directly enter production. DOD’s practice is to procure sufficient ACTD prototypes to provide a 2-year residual capability. When it determines that the original prototypes will be consumed during the basic demonstration, additional prototypes are procured for potential use after the basic ACTD demonstration. However, these additional assets—like the basic demonstration prototypes—have not been independently tested to determine their effectiveness and suitability. Procuring additional ACTD prototypes before product and concept development and testing is completed risks wasting resources on the procurement of items that may not work as expected or may not have sufficient military utility. Representatives from the service test agencies did not support this practice and agreed that it had the potential for problems. Without a meaningful independent assessment of a product’s suitability, effectiveness, and survivability, users cannot be assured that it will operate as intended and is supportable. Congress has expressed concern about the amount of equipment being procured beyond what is needed to conduct the basic ACTD demonstration. Its concern is that DOD is making an excessive commitment to production before military utility is demonstrated and before appropriate concepts of operation are developed. For example, DOD plans to procure 192 Enhanced Fiber Optic Guided missiles at an estimated cost of $27 million and 144 Line-of-Sight Anti-Tank missiles at an estimated cost of $28 million beyond the quantities of missiles required for the ACTD demonstrations—64 and 30 missiles, respectively. The production of these additional missiles will follow the production of the missiles needed for the basic demonstration and will continue on a regular basis throughout the 2-year, post-demonstration period. If the prototypes are deemed to have sufficient military utility, the service involved will be expected to fund the production of additional missiles beyond these quantities. By establishing a regular pattern of procurement in this way, DOD risks committing to a continuing production program before a determination is made about the technology’s military utility and before there is assurance that the system will meet validated requirements and be supportable. The strength of the ACTD program is in conducting basic demonstrations of mature technology in military applications before entering the normal acquisition process. This practice could significantly reduce or eliminate the time and effort needed for technology development from the acquisition process. For this to occur, it is essential that DOD use only mature technology in its ACTDs. DOD’s criteria for selecting technologies for ACTD candidates should be clarified to ensure the selection of mature technology with few, if any, exceptions. Further, ACTDs may not, by themselves, result in an effective and safe deployment of military capability. It is important that product and concept development as well as test and evaluation processes be allowed to proceed before the service commits to the production of the demonstrated technology. If an ACTD project is shown to have military value, the normal acquisition processes can and should be tailored—but not bypassed—before DOD begins production. 
Lastly, emphasizing the need to complete concept and product development and testing before procuring more items than needed for the basic demonstration would reduce the risk of prematurely starting production. We recommend that the Secretary of Defense clarify the ACTD program guidance to (1) ensure the use of mature technology with few, if any, exceptions and (2) describe when transition to the development phase of the acquisition cycle is necessary and the types of development activity that may be appropriate. Further, we recommend that the Secretary of Defense limit the number of prototypes to be procured to the quantities needed for early user demonstrations of mature technology until the item's product and concept development and testing have been completed. In commenting on a draft of this report, DOD cited its revised ACTD guidance, which states: ". . . new technologies proposed for incorporation into an ACTD should not be in the 6.1 (basic research) or 6.2 (applied research) budget categories. Furthermore, the technologies must have been successfully demonstrated at the subsystem or component level and at the required performance level prior to the start of the ACTD." While this guidance is improved over previous versions, the new guidance permits the selection of immature technology—even as the primary or core technology—provided that it is demonstrated prior to the ACTD demonstration. Also, some recent ACTD projects have been approved without the technologies having been identified. Moreover, the new guidance goes on to describe several types of exceptions under which immature technologies may be permitted to be used in an ACTD. As our report states, the use of immature technologies has delayed programs, and we continue to believe DOD needs to focus the ACTD program on the use of mature technology with few, if any, exceptions. DOD also agreed that some but not all ACTDs may require additional product and concept development before proceeding into production. DOD stated that a mandatory engineering and manufacturing development phase would not be appropriate for all ACTD projects. We agree; however, the existing ACTD guidance focuses on the transition directly to production and provides too little guidance concerning a possible transition to development. As stated in our recommendation, the guidance should specify when a transition to development may be appropriate and the kinds of developmental activities that may be warranted. Finally, DOD agreed that the number of ACTD prototypes to be procured should be limited until the Under Secretary can confirm that sufficient testing has been satisfactorily completed to support any additional procurement. We agree with DOD that test results should form the basis for starting limited procurement. However, DOD's equating a determination of military utility (based on an ACTD demonstration) with a determination of a system's readiness to begin production is inappropriate because production decisions require more testing data. We have long held the view and have consistently recommended that DOD use extreme caution to avoid premature commitments to production. To determine the adequacy of the ACTD program's selection criteria in assessing technology maturity and guidance for transitioning to the normal acquisition process, we reviewed existing program guidance, published reports, the Office of the Inspector General's April 1997 ACTD report, and the recommendations of the 1986 Packard Commission and the 1996 Defense Science Board.
We discussed selection criteria, transitioning to the acquisition process, and all 34 of the individual ACTD programs approved through fiscal year 1997 with representatives from the Office of the Deputy Under Secretary of Defense (Advanced Technology), Washington, D.C.; the Army's Deputy Chief of Staff for Operations and Plans, Office of Science and Technology Programs, Washington, D.C.; the Air Force's Director for Operational Requirements, Rosslyn, Virginia; the Navy's Requirements and Acquisition Support Branch, Washington, D.C.; the Marine Corps' Combat Development Command Office of Science and Innovation, Quantico, Virginia; the Joint Staff's Acquisition and Technology Division and Requirements Assessment Integration Division, Washington, D.C.; and the Office of the Commander in Chief, U.S. Atlantic Command, Norfolk, Virginia. We discussed the issue of procuring additional residual assets for early deployment with representatives from DOD's Office of the Director, Operational Test and Evaluation, Washington, D.C.; the Army's Test and Evaluation Management Agency, Washington, D.C.; the Army's Operational Test and Evaluation Command, Alexandria, Virginia; the Marine Corps' Operational Test and Evaluation Activity, Quantico, Virginia; the Air Force's Test and Evaluation Directorate, Washington, D.C.; and the Navy's Commander, Operational Test and Evaluation Force, Norfolk, Virginia. We conducted our review from September 1997 to July 1998 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to other interested congressional committees; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report were Bill Graveline, Laura Durland, and John Randall. The following are GAO's comments on the Department of Defense's (DOD) letter, dated August 31, 1998. On the selection of technologies, DOD's revised guidance states: ". . . new technologies proposed for incorporation into an ACTD should not be in the 6.1 (basic research) or 6.2 (applied research) budget categories. Furthermore, the technologies must have been successfully demonstrated at the subsystem or component level and at the required performance level prior to the start of the ACTD." On transitioning to the acquisition process, the guidance states: ". . .Strategies and approaches are described to facilitate transitioning from an ACTD to the acquisition process as defined in DOD 5000.2R. The suggested approaches are based on lessons learned. The focus of the suggestions are ACTDs that are planned—if successful—to enter the acquisition process at the start of LRIP." Although there is a basic recognition that the transition to development may be possible, the bulk of the guidance is on how and when to transition to production. As pointed out in the report, the guidance does not describe when a transition to development may be appropriate or what types of development activity may be appropriate. In our view, the guidance needs to be more balanced between the possibility of transition to development and the transition of ACTD projects directly to production.
5. As discussed in the report, the independent operational testing agencies are observers in the ACTD demonstrations and not active participants. While the Office of the Director of Operational Test and Evaluation was an observer during the Predator demonstration, a determination was not made that Predator was potentially effective and suitable.
6. We agree that ACTDs address the technology's suitability. However, the ACTD focus on suitability is in a very general sense, and extensive data are not collected on the system's reliability, maintainability, and other aspects of suitability needed to support production decisions.
7. As our report states, the Predator was rushed into low-rate initial production prematurely given the limited amount of testing conducted at that time and the problems that were uncovered during that limited testing.
8. DOD's equating a determination of military utility (based on an ACTD demonstration) with a determination of a system's readiness to begin production is inappropriate because production decisions require more testing data. During our review, we noted that sufficient information was not obtained from an ACTD demonstration to make a commitment to limited production. Commercial practice would dictate that much more information be obtained about a product's effectiveness, suitability, producibility, or supportability before such a commitment is made. We believe the ACTD guidance needs to be more balanced and should anticipate that ACTD prototypes may need to undergo more product and concept development and testing prior to production. We have long held the view and have consistently recommended that DOD use extreme caution to avoid premature commitments to production.
9. We are not suggesting that a lengthy development phase be conducted on all ACTD products nor, as DOD appears to suggest, that an ACTD prototype may be ready to start limited production immediately after its basic demonstration. As DOD stated in its intent to establish the ACTD program, we believe the benefit of the ACTD process is in eliminating or reducing technology development, not in making early commitments to production or in postponing product and concept development and testing activities until after production starts.
10. While ACTD demonstrations are performed in operational environments, they are not operational tests. During the course of our work, we held several discussions with officials from the operational test community. Those officials were in favor of the user demonstrations featured in the ACTD program, but none considered those demonstrations as substitutes for operational testing because of their informality, lack of structure, and the lack of a defined requirement by which to measure performance.
11. DOD appears not to recognize the very real possibility that the ACTD demonstration may find the technology in question to have little or no military utility or to be unaffordable in today's budgetary and security environment. In fact, due to budget constraints, the Army was forced to prioritize its procurement programs, and the planned procurement funding for Enhanced Fiber Optic Guided missiles has been reallocated.
12. While we agree with DOD that test results should form the basis for starting limited procurement, the testing needed goes beyond the basic demonstration of military utility provided by the ACTD program.
Pursuant to a congressional request, GAO reviewed the current Advanced Concept Technology Demonstration (ACTD) program, focusing on: (1) whether the selection process includes criteria that are adequate to ensure that only mature technologies are selected for ACTD prototypes; (2) whether guidance on transitioning to the normal acquisition process ensures that a prototype appropriately completes product and concept development and testing before entering production; and (3) the Department of Defense's (DOD) current practice of procuring more ACTD prototypes than needed to assess the military utility of a mature technology. GAO noted that: (1) through the determination of military value of mature technologies and their use in the acquisition process, ACTDs have the potential to reduce the time to develop and acquire weapon systems; (2) however, several aspects of the ACTD program can be improved; (3) DOD's process for selecting ACTD candidates does not include adequate criteria for assessing the maturity of the proposed technology and has resulted in the approval of ACTD projects that included immature technology; (4) DOD has improved its guidance on the maturity of the technologies to be used in ACTD projects but the revised guidance describes several types of exceptions under which immature technologies may be used; (5) where DOD approves immature technologies as ACTD program candidates and time is spent conducting developmental activities, the goal of reduced acquisition cycle time will not be realized; (6) further, guidance on entering technologies into the normal acquisition process is not sufficient to ensure that a prototype completes product and concept development and testing before entering production; (7) the guidance does not mention the circumstances when transition to development may be appropriate or the kinds of developmental activities that may be appropriate; (8) while commercial items that do not require any further development could proceed directly to production, many ACTDs may still need to enter the engineering and manufacturing development phase to proceed with product and concept development and testing before production begins; (9) through the ACTD early user demonstration, DOD is expected to obtain more detailed knowledge about its technologies before entering into the acquisition process; (10) however, in the one case in which an ACTD has proceeded into production, DOD made that decision before completing product and concept development and testing, thereby accepting programmatic risks that could offset the schedule and other benefits gained through the ACTD process; (11) DOD's current practice of procuring prototypes beyond those needed for the basic ACTD demonstration and before completing product and concept development and testing is unnecessarily risky; and (12) this practice risks wasting resources on the procurement of items that may not work as expected or may not have sufficient military utility and risks a premature and excessive commitment to production.
There are approximately 1.1 million school-age dependents of military parents in the United States and an increasing number of these dependents have a parent deployed overseas. While DOD operates 194 schools for military dependents in seven states, two territories, and in 12 countries, DOD estimates the majority of military dependent students attend U.S. public schools operated by local school districts. Because of their family situations, military dependent students may face a range of unique challenges, such as frequent moves throughout their school career and the emotional difficulties of having deployed parents. Figure 1 is a photo from a school we visited with about 90 percent military dependent students that showed the global locations of students’ previous and future residences. Military dependent students often find stability in the school routine during the challenges of deployment and the resulting disruptions to daily life, according to a DOD publication. Appropriations for Education’s Impact Aid program, reauthorized and incorporated into Title VIII of the Elementary and Secondary Education Act of 1965 (ESEA), were almost $1.3 billion in fiscal year 2010, and DOD provided $41 million in additional funding for DOD Impact Aid. DOD Impact Aid was established in the early 1990s to supplement the Education Impact Aid program which, as we testified at that time, was underfunded (i.e., meaning that appropriations did not fully fund authorizations). Together, the programs are intended to compensate school districts for revenue losses resulting from federal activities and to maintain educational standards for all students. Federal activities that can affect revenues or the ability to maintain standards include federal ownership of property within a district as well as the enrollment of children whose parents work or live on federal land (e.g., military bases). Education Impact Aid funds are awarded in formula grants based on various types of federally connected children in the school district and other measures. If appropriations are not sufficient to provide funding at the level for which all districts qualify, funding is reduced with more heavily impacted districts receiving higher percentages of their maximum payments than less impacted districts. Of the more than 14,000 school districts nationwide, 902 received Education Impact Aid payments for federally connected children in fiscal year 2009. Because Impact Aid payments are not aimed at specific educational goals, accountability requirements for the use of funds or for specific outcomes are minimal. DOD Impact Aid, administered by DOD’s Education Activity (DoDEA) Educational Partnership office is intended to supplement the much larger Education Impact Aid program. All districts that receive DOD Impact Aid also receive Education Impact Aid. There are no statutory requirements mandating that school districts report on the use of these funds. DOD Impact Aid has three distinct funding components for school districts with military dependent students. These funding components are: Supplemental assistance. These funds are allocated to school districts in which military dependents made up at least 20 percent of average daily attendance during the previous school year. Data from Education’s Impact Aid application are used to determine a district’s eligibility. About 120 districts receive funds from DOD Impact Aid Supplemental assistance annually. 
Total amounts awarded to all districts combined have ranged from $30 to $40 million in each fiscal year from 2002 through 2010, and the funding has been included by Congress in DOD's annual appropriation for operation and maintenance for defensewide activities. Assistance for children with severe disabilities. Funds are allocated to school districts with at least two military dependent children with severe disabilities where the costs exceed certain criteria. The funding is a reimbursement for expenses paid, and is sent to the school districts after the expenses are incurred. According to a DOD official, approximately 40–50 school districts that apply and meet the cost criteria are awarded funds each year out of the 400–500 school districts that are potentially eligible. Total amounts awarded to all districts combined have generally ranged from $4 to $5 million in each fiscal year from 2002 through 2010. Assistance for districts significantly affected by BRAC. Funds are allocated to school districts that have been heavily impacted as a direct result of large-scale military rebasing. Beginning in the late 1980s, the U.S. military has attempted to streamline the nation's defense infrastructure through a series of base realignments and closures. For example, as part of the 2005 BRAC round, DOD has relocated or plans to relocate more than 120,000 military and DOD civilian personnel by September 2011. In addition, DOD and local community officials expect thousands of dependents to relocate to communities near the BRAC 2005 growth bases. Thus, several U.S. bases could each see the addition of more than 10,000 military and DOD civilian personnel, along with their families and children. To qualify for these DOD Impact Aid BRAC funds, districts must have had at least 20 percent military dependent students in average daily attendance during the previous school year and have had an overall increase or decrease of 5 percent or more of these students, or an increase or decrease of no less than 250 military dependent students at the end of the prior school year. No school district is permitted to receive more than $1 million in assistance in a fiscal year. In fiscal years 2006 and 2007, 45 districts received BRAC funding from DOD Impact Aid totaling $15 million. Although authorized, funding was not provided in fiscal years 2002, 2008, 2009, and 2010 (see table 1). (These eligibility criteria are illustrated in a simplified sketch following the program descriptions below.) In addition to DOD Impact Aid, DOD provides other assistance to school districts and military families for school-age children through the following programs: DoDEA grants to schools. DoDEA has two programs that provide grants for military-connected schools nationwide. These grant programs began in 2008, and are authorized through fiscal year 2013. Unlike the Supplemental Impact Aid program, the DoDEA grants are targeted for specific uses and have specific evaluation requirements. The competitive grant program aims to enhance student achievement, provide professional development for educators, and integrate technology into curricula at schools experiencing growth in numbers of military dependent students. The invitational grant program aims to enhance student achievement and ease challenges that military dependent students face due to their parents' military service. Through these two programs, DoDEA awarded approximately $56 million to 40 schools in fiscal year 2009, and approximately $38 million to 32 schools in fiscal year 2010. Military family life consultants.
DOD’s Office of the Deputy Under Secretary of Defense for Family Policy, Children, and Youth administers the Military Family Life Consultant program, which provides counseling services to faculty, staff, parents, and children in school districts with a high percentage of parent deployments. The program began in fiscal year 2004 as a demonstration program, and received $150 million in fiscal year 2009 and $259 million in fiscal year 2010. Working as DOD contract employees, these consultants typically assist with issues including school transitions, adjustment to deployments and reunions, and parent-child communication. In addition, consultants try to promote a culture that encourages service members and their families to seek counseling or other assistance when they have a problem. As of fall 2010, there were more than 200 consultants supporting 297 schools and 105,000 military dependent students worldwide. School liaison officers. Each service branch—the Army, Marine Corps, Navy, and Air Force—administers the School Liaison Officer program, which provides military commanders with the support necessary to coordinate assistance to and advise military parents of school-age children on educational issues and assist in solving education-related problems. In fiscal year 2010, the Army spent $14.7 million on its program, the Marine Corps $2.1 million, and the Navy $3.6 million. A school liaison officer’s responsibilities include promoting military parents’ involvement in schools, assisting children and parents with overcoming obstacles to education that stem from the military lifestyle, and educating local communities and schools on the needs of military children. As of fall 2010, there were more than 250 school liaison officers assisting DOD and military-connected public schools throughout the world, and more than 150 of those were in the United States, all of whom are disbursed across the service branches. The Army reported funding 141 school liaison officers, the Marine Corps 24, the Navy 58, and the Air Force 82. Tutor.com. Since the end of 2009, DOD has provided children of active duty military with free, unlimited access to online tutoring, academic skills courses, and homework assistance in math, science, social studies, and English for kindergarten through 12th grade (K–12) students through Tutor.com. The program received $2 million in fiscal year 2009. Professional tutors assist military dependent students with completing homework, studying for standardized tests, and writing papers. Some tutors are career specialists who can assist with resume writing and job searches. The program provided 162,570 sessions during fiscal year 2010. Heroes at Home for preschool-age children. Heroes at Home, a pilot program established in fiscal year 2007, seeks to assist active duty parents of preschool-age children at military installations with significant transition or deployment activities. The program provides research-based curriculum and training for parent educators, who then work with other parents to help them mitigate any risk to children’s well-being or educational readiness posed by military life. Over a 3 year period, Heroes at Home has served more than 1,900 military families and almost 2,400 children from birth until kindergarten. The program has received $3.4 million since fiscal year 2008. Activities supported by the funding ended in September 2010, but will continue at some installations through other funding mechanisms and existing programs. 
In addition to Education and DOD Impact Aid and other DOD assistance for military dependent children, school districts may also qualify for other funding from Education. For example, a district may receive funding through Title I, Part A of ESEA, which authorizes financial assistance to school districts and schools with high numbers or high percentages of economically disadvantaged children. Funding may also come through the Individuals with Disabilities Education Act, which provides formula grants to states and school districts for children ages 3–21 who have a disability that impacts their education. Little is known about the specific use and effectiveness of DOD Impact Aid Supplemental funds because most school districts place the aid into their general fund to support salaries, maintenance, and operation of schools. In our survey of school districts that received DOD Impact Aid Supplemental funds in any year from 2001 through 2009, of the 87 school districts that reported receiving funds for the 2009–2010 school year, 85 percent put at least some of their award in their general fund. Approximately 15 percent of reported funds went to a capital project fund, about 11 percent to a special revenue fund, and about 5 percent to another account (see fig. 2). When asked to provide a brief description of how DOD Impact Aid Supplemental funds were spent, survey respondents reported using them for salaries, supplies, technology, transportation, heating and cooling systems, and capital upgrades. School districts reported using, on average, about 77 percent of their general fund for salaries and benefits. The general fund was also used to pay for supplies, property services (such as operations, maintenance, and repair of district-owned property), and other services such as food and transportation (see fig. 3). DOD Impact Aid Supplemental funds are not required by statute to be used for specific purposes or to be targeted directly to military dependent students. Further, there are no tracking or reporting requirements on the expenditures of funds and, as a result, there is no way to determine specifically how the funds are used. However, school districts that expend $500,000 or more are subject to a financial audit in accordance with the Single Audit Act. Fewer than 20 percent of the districts that responded to our survey reported using a separate accounting code to track expenditures of DOD Impact Aid Supplemental funds. School districts that completed our survey had mixed opinions regarding how easy or challenging it is or would be to track how they spend DOD Impact Aid Supplemental funds. Thirty-nine percent of districts receiving these funds said it would be easy for them to track the funds’ use. For example, some districts already put their DOD Impact Aid Supplemental funds into a separate fund or have an accounting system that can track spending using a unique code. One school district official said in the survey that the district would simply designate its DOD Impact Aid Supplemental funds for a particular expenditure, such as 25 percent of its total expenditures for counseling services, if tracking and reporting were required. However, an equal percentage of districts in our survey said that tracking exactly how funds are spent would be challenging and time consuming because their accounting systems are not set up to do so, and their funds are used for multiple programs and needs (see fig. 4). 
In addition, we heard from several district officials that the amount of money received by districts is so small—less than 2 percent, on average, of a district’s total budget—that additional resources to account for the funds would not be justified. One district official from Colorado said that DOD Impact Aid funding is too small and too unpredictable to dedicate specifically to military dependent students or to fund special staff or programs. Officials in four of the seven school districts that we interviewed and 19 survey respondents commented on the flexibility afforded by DOD Impact Aid funding. Many of these districts appreciated the flexibility of these funds because they can spend the money how they deem most beneficial for their district. Flexible funding is particularly important now, some school officials said, because of state cuts to education budgets in recent years. In another 2010 GAO survey of school districts on stimulus spending, an estimated one-third reported budget cuts in the 2009–2010 school year and nearly one in four reported cutting jobs, even with American Recovery and Reinvestment Act of 2009 funds. Several school districts we contacted reported using DOD Impact Aid Supplemental funds to pay for necessities that would have otherwise been cut due to less funding from the state. Fifty-one percent of survey respondents said if they did not receive DOD Impact Aid Supplemental funds for the 2010–2011 school year, they would likely or very likely make cuts or adjustments to instructional staff (see fig. 5). Forty-six percent reported that they would likely or very likely make cuts or adjustments to technology expenditures, and 42 percent reported that supplies and classroom materials would likely or very likely be cut. One school district official said if his district did not receive the funds, it would prioritize expenditures and any consideration of possible staff reductions would be taken very seriously, but used as a last resort. Another school district reported that since this funding is small, a one-year loss would impact technology and supplies, but staffing would only be affected if the funds were lost going forward. When we asked school district officials in our survey if the DOD Impact Aid Supplemental funding is effective in improving the quality of education provided to military dependent students, 66 percent strongly agreed. One district official from Texas told us that while DOD Impact Aid Supplemental funding is not a significant amount of money compared to that of the Education Impact Aid program, it is “the icing on the cake” for addressing the unique needs of their military dependent students. In addition, several school district officials we contacted said the funding is very important and allows the district to improve the quality of education. For example, the funds enabled one school district to make enhancements to their educational programs, offer new programs, and upgrade facilities. Sixty-seven percent of the districts responding to our survey strongly agreed that DOD Impact Aid Supplemental funding serves its purpose by compensating them for some of the tax and other revenues lost due to a federal presence in the district. Yet, only 16 percent strongly or somewhat agreed that the amount of DOD Impact Aid Supplemental funding received is adequate. Further compounding the difficulty of efforts to evaluate the effectiveness of DOD Impact Aid funds, we found a lack of national data on military dependent students in general. 
There are no national public data on military dependent students’ academic progress, attendance, or long-term outcomes, such as college attendance or workplace readiness. DoDEA officials told us the only data currently available on this population come from the Impact Aid forms completed by parents, which provide information on whether a student is federally connected or not. Federal agency officials and a military education advocacy group have expressed interest in having more data collected about military dependent students, as it is for other public school cohorts. ESEA, amended and reauthorized by the No Child Left Behind Act of 2001, designates four specific groups of students as reportable and accountable subgroups: economically disadvantaged, major racial and ethnic groups, those with disabilities, and those with limited proficiency in English. The legislation holds states, school districts, and individual schools accountable for the achievement of all students, including students in these four subgroups. While some senior Education officials have acknowledged the importance of obtaining these data for military dependent students, they have not yet determined what, if any, concrete actions they will take. Similarly, the Military Child Education Coalition, a nonprofit organization focused on ensuring quality educational opportunities for all military dependent children, is working with DOD and Education to explore ways to use existing capacities to create processes for collecting and analyzing data on all students of active duty, National Guard, and Reserves families. While DOD Impact Aid funds are not targeted for use for military dependent students only, collecting this information could help serve these students better. Senior representatives from Education and the Military Child Education Coalition explained that without more specific data, educators, base commanders, and community leaders are not able to provide military dependent students with appropriate resources because they do not have information on their specific educational needs or the effectiveness of the schools and programs serving them. Further, these data could help military families make more informed decisions about where to enroll their children by identifying how well specific schools educate military dependent students. For example, military families may in some cases choose whether to live on or off a base, and may choose which school district their children will attend, depending on the quality of the schools. A senior Education official also emphasized that this information could shed light on practices that work well generally in educating other highly mobile students, such as homeless or migrant students. In addition, using data on military dependent students in a longitudinal database would allow researchers to better understand these students’ academic achievement and educational outcomes over time and the factors that might affect them. At the same time, some groups representing school districts have expressed concerns about making military dependent students a reportable subgroup. These concerns include creating an additional reporting burden and new costs for school districts and concerns about singling out military dependent students as a unique group. However, Education officials did not anticipate excessive cost or burden for school districts to collect and report these data. 
Officials at three-quarters of the school districts responding to our survey reported that issues associated with military dependent students' frequent moves to new schools were moderately, very, or extremely challenging. In addition, 58 percent reported that meeting the needs of military dependent students with disabilities was moderately, very, or extremely challenging. Three of the top four challenges reported by districts responding to our survey were related to the mobility of military families: increased academic needs due to differences in state and district curricula, lack of connectedness with school, and behavioral issues in the classroom. Serving students with special needs was another important challenge faced by the school districts in our survey. These challenges, as well as the emotional toll faced by students as a result of frequent moves, were echoed in the interviews we held with selected school districts. A smaller percentage of survey respondents also reported lack of participation by parents, transportation to and on bases, and transitioning of teachers and staff who are in military families, among other challenges (see fig. 6). Key issues associated with the mobility of military dependent students identified by school districts we contacted were different state and district academic curricula and standards, lack of student and family connectedness to school, and behavioral and emotional issues of students, most often related to a parent's deployment or absence.
Different Academic Curricula and Standards
The largest challenge reported by school districts in our survey was the increased academic need of children in military families who transfer to a school with different curricula or academic standards than those in their previous school and thus need additional support. Forty-one percent of school districts rated increased academic needs due to differences in curricula between districts and/or states as extremely or very challenging, and 32 percent said it was moderately challenging. States use different curricula and have different graduation and academic standards and assessment practices, sometimes making it difficult for a receiving school to integrate new students. For example, one school district official we interviewed noted that the state requires 25 classes to graduate from high school, whereas other states require only 20 classes, which has created challenges for incoming juniors and seniors. These inter-district differences can extend to the placement of students in special education or gifted programs. A school district official in one state, for example, told us that some students who received special education services in their previous state no longer qualified for these services. While the district works to provide adequate supports within the classroom, the official said it is sometimes difficult to explain to students and their families why they no longer qualify for services to which they are accustomed. These challenges are compounded when the records from the sending district do not arrive on time or are incomplete—an issue identified as a challenge by some districts. In addition, mobility often results in classes with a high degree of student turnover each year, creating an extra burden on teachers to orient new students to class material, assess their academic abilities, and provide extra support, as needed.
Officials at five of the schools we interviewed told us that each year at least one-third of their student population turns over. A principal of an elementary school in Colorado told us that only one of 57 fifth graders has been with the school since kindergarten. Because this turnover takes place throughout the school year, teachers must spend time continually absorbing and integrating new students into their classrooms, which reduces the time available for instruction. We found very few generalizable studies that systematically examined the academic and behavioral effects of mobility for military students specifically. National student-level achievement data on military dependent students are also not available, so it is difficult to link achievement and mobility. However, we recently reported that mobility is one of several interrelated factors, including socio-economic status and lack of parental education, which have a negative effect on academic achievement. In addition, some of the studies we reviewed found that the effect of mobility on achievement also varied depending on such factors as the student's race or ethnicity, special needs, grade level, frequency of school change, and characteristics of the school change—whether it was between or within school districts, or to an urban district from a suburban or rural one.
Lack of Connectedness with School
Military dependent students' lack of connectedness with their school due to frequent moves was reported as extremely or very challenging by 24 percent of school districts in our survey, and moderately challenging by 34 percent. Frequent moves make it difficult, for example, for students to get involved with extracurricular activities or sports if they move after the tryout season. Officials we interviewed from one school in Texas said they allowed children to try out for extracurricular activities by sending a video before they arrived, and another allowed newly arrived military dependent students to try out for teams mid-season. Students are not guaranteed their same position (e.g., quarterback), which can be disappointing, but they will be given an opportunity to try out for the team. Officials also said limited child care options and lack of transportation to the military base limit students' ability to attend after-school events. School liaison officers in another school district similarly attributed families' feelings of isolation and difficulty attending extracurricular activities to the lack of public transportation on base. Officials in 23 percent of districts responding to our survey reported transportation was at least moderately challenging. Related to a student's lack of connectedness is lack of parental involvement. School principals we interviewed in Colorado said military parents tend to avoid school involvement partly because they anticipate leaving in a few years. The lack of parental involvement is particularly troubling for district officials because they feel that parents need to be part of the school community for success in educating their students. Finally, related to mobility, 13 percent of survey respondents reported that the transitioning of teachers and staff who are in military families and leave when those families are reassigned was extremely or very challenging, and 27 percent reported this was moderately challenging.
Officials in two school districts told us that hiring military spouses is advantageous because they have firsthand experience with military issues and can relate well to military dependent students. However, when these military spouses leave the school district, it creates more inconsistency in the education of military dependent students. Officials in 24 percent of school districts in our survey said behavioral issues in the classroom, such as aggression—which may be attributable to frequent moves and parent deployment—were extremely or very challenging, and 31 percent said they were moderately challenging. Officials we interviewed in six of the seven districts said there is an emotional toll faced by students as a result of frequent school transfers. In one school district in Virginia, approximately 60 percent of students who started at a school are no longer there at graduation. Officials in this district found that frequent moves are a significant hindrance to the academic and emotional success of military dependent students. Some officials said mobility-related emotional issues tend to be more challenging for high school students, who may have more trouble fitting in and meeting academic requirements for graduation. The students we spoke with at one high school, many of whom were military dependents and had moved frequently, agreed that transitioning to new schools was most difficult during high school because social groups are already firmly established. School district officials we interviewed also identified emotional and behavioral challenges connected to parent deployment, absence, and in some cases, the death of a parent. In particular, officials we interviewed at two school districts near Army bases noted an increase in emotional and behavioral issues, including student truancy and tardiness, in recent years. Specifically, school officials near Army bases in Colorado and Missouri agreed that students' misbehavior and acting out has increased in recent years and is currently at chronic levels. One superintendent noted that her county has lost more than 300 soldiers in the Afghanistan and Iraq conflicts. A school counselor added that reintegration when the absent parent returns can also be stressful as families re-establish rules and dynamics. Some districts noted that the leave soldiers take upon return from deployment has resulted in long student absences. While district officials we spoke with wanted to be accommodating to reunited families, they noted that these student absences were taking an academic toll. Officials in these two districts said that teachers have found themselves fulfilling the role of social worker for military dependent students, a position they felt underqualified to fill. A 2010 study examining the well-being and deployment difficulties of more than a thousand families with military children aged 11–17 found that these children tended to have more emotional difficulties than national samples. The study found that older children had a greater number of school, family, and peer-related difficulties during deployment, and girls of all ages reported more challenges during both deployment and deployed-parent reintegration. Both the length of parental deployment and poor mental health of the nondeployed caregiver were significantly associated with a greater number of challenges for children both during deployment and deployed-parent reintegration.
Fifty-eight percent of survey respondents cited serving students with special needs as extremely or very challenging (36 percent) or moderately challenging (22 percent). We heard similar views in our interviews. For example, a special education director in one district we visited said that the difficulties most military dependent students face in transitioning frequently to and from schools are exacerbated for special education students given their greater instructional and other needs. Serving students with disabilities in public schools is a challenge for many school districts nationwide because these students are increasingly taught in mainstream classrooms. In 2009 we found that state and local school district officials believed classroom teachers were generally unprepared for teaching students with disabilities, and a number of state and district officials wanted a stronger focus in teacher preparation programs on instruction of children with disabilities. DOD Impact Aid's Children with Severe Disabilities program reimburses school districts serving military dependent students with severe disabilities, but a number of school districts we contacted said the application for reimbursement is burdensome, in some cases taking numerous hours to complete. According to a DoDEA official, approximately 10 percent of the school districts that serve two or more military dependent children with special needs and establish that they meet the cost criteria submit an application each year. In accordance with statutory requirements, payment calculations require, among other things, determinations of average per-pupil expenditure in the state as well as nationally. According to some school districts, calculations and application requirements are time-consuming and require them to list specific costs expended on services for each eligible child. One director of special education told us that the process of applying for the Children with Severe Disabilities reimbursement takes about 80–90 hours of staff time. She explained that collecting the information requires obtaining data from occupational and physical therapists, and from other offices including transportation and special education. When there is staff turnover among any of these contacts, the process takes even longer. Officials from two districts we interviewed said the amount of the reimbursement was very small compared to the difficulty with completing the application. In response to an open-ended survey question, officials in 10 of the 39 responding school districts that have received these funds said the application is difficult to complete. DoDEA officials told us they are aware that the application can be difficult to complete, and one official was concerned that some districts that could benefit from the funds may not apply for them given the burden of the application. In response, DoDEA plans to issue more guidance in the form of frequently asked questions for the next application process in spring 2011. Officials plan to base this guidance on questions the department has received from applicants over the last several years. They also plan to develop a webinar to walk applicants through the application process for the next round. Additional counseling, use of technology, and flexibility on academic requirements were the strategies most survey respondents identified as helping them serve the unique needs of their military dependent students.
In addition, school district officials we interviewed reported using a range of other related strategies, including providing literacy coaches, encouraging peer-to-peer support and other support groups, and reaching out to military installations for assistance (see fig. 7). However, because most school districts receiving DOD Impact Aid Supplemental funds deposit the funds in the district's general fund and do not separately track their spending, we could not assess the extent to which any of these strategies were funded through DOD Impact Aid Supplemental funds rather than other funding sources. Some of the strategies school officials described are funded by other DOD programs or nonmilitary sources.

Eighty percent of school districts in our survey reported using additional counseling as a key strategy to address the emotional needs of military dependent students, and many provided services such as deployment support groups and student peer support groups. One district hired a full-time psychologist to address the emotional and social needs of students due to both frequent school moves and recurring deployments of parents. Counseling and support often extend to other members of the family who are also struggling to cope with a deployed parent. For example, a home liaison in one district told us she holds training sessions on discipline with the at-home parent. Military parents we interviewed at one school district explained that sometimes the stigma associated with mental health services deterred military families from seeking help on base, raising the importance of supports at schools. Officials we interviewed at several school districts said they provided extra training for teachers and counselors on issues specific to military dependent students. In Texas, all counselors in one district received training in how to respond to needs of these students and their families in transitioning to a new area and how to help students cope with the loss of a parent. Officials in six of the seven school districts we interviewed told us they provided deployment support groups, typically led by school counselors, to provide military dependent students an opportunity to share feelings and solutions. Sixty-five percent of the schools in our survey offered peer-to-peer support programs. For example, "Student 2 Student" is a peer program promoted by the Military Child Education Coalition in which a team of volunteer students, supervised by a school counselor, teacher, or other school staff, assists both incoming and outgoing students to cope with or prepare for changes in academics and relationships. Further, 33 percent of survey respondents reported using military or deployment-focused bulletin boards to provide support for military dependent students. For example, one school we visited posted a "heroes wall," which contained pictures and text the children created about their parent who was deployed.

School district officials also highlighted the involvement of members of the military in supporting military dependent students. Sixty-one percent of districts responding to our survey said they involve members of nearby installations, and 64 percent reported taking advantage of counseling and other support offered by base representatives. For example, volunteers from one local installation provided one-on-one tutoring and military members attended physical education classes to help promote wellness and inspire the students to achieve a higher level of physical fitness.
Technology, such as online grades, coursework, and attendance records accessible to parents at home or while deployed, was used by 80 percent of the school districts in our survey to help bridge the gap between students and deployed parents. For example, a Texas school district highlighted its use of an online resource that lets students take assessments aligned to state standards and directs them to individualized tutorials to improve skills. In addition, parents can monitor their child's progress online at home or abroad. According to one school district official, families in his district have reported that this program has been a "blessing" in helping their children academically. Thirty percent of school districts in our survey reported streaming live graduation ceremonies. The principal of one school, which sends videotaped graduation ceremonies to deployed parents, said the video includes a special ceremony for these students and interviews with graduates and their families. Thirty percent of districts also reported in our survey providing Web-camera interactions with deployed parents.

To address academic standards, which differ among districts, 74 percent of districts in our survey reported being flexible or taking an individualized approach to academic requirements. This may include being flexible on testing, course credits, or other requirements to meet the needs of incoming military dependent students. Districts in Virginia and Colorado made adjustments to requirements for courses and standardized testing based on requirements at the previously attended school and the point in the school year, for example, allowing seniors to use their previous school's graduation requirements. Some schools hired extra teachers and staff to help facilitate the transition for students. One school district in Colorado created a position called an "integrationist" whose sole job was to ease the transition of the many transferring military dependent students by gathering academic, extracurricular, and personal information about them before they arrived in the district, then helping them get into the appropriate classes and extracurricular activities. Due to the constant influx of new military dependent students, an elementary school in Virginia hired extra reading support specialists to work individually with children who enter the school with poor reading skills. Seventy-two percent of school districts we surveyed reported using literacy coaches to assist military dependent students. Military parents we interviewed in Virginia noted that of everything the school did for military children, this extra and individualized academic support was the most appreciated.

About half the districts in our survey highlighted their state's participation in the Interstate Compact on Educational Opportunity for Military Children as an effective strategy to address some of the challenges related to mobility and academics. As of October 2010, 35 states had signed this agreement, which sets forth expectations for participating states to address key transition issues encountered by military families, including enrollment, placement, attendance, eligibility, and graduation. For example, the compact states that school districts will either waive specific courses required for graduation if similar course work has been satisfactorily completed in another district or will provide reasonable justification for denial.
Officials we interviewed in all five states also mentioned their state's participation in the compact as a strategy to assist with issues related to transition of military dependent students.

DOD and Education have developed and implemented practices that facilitate their collaboration on efforts to assist military dependent students, their schools, and families. In our previous work, we have identified practices that help enhance and sustain interagency collaboration. These practices include articulating common objectives and resources, agreeing on compatible operating procedures and responsibilities, and reinforcing accountability through monitoring. The agencies have worked together, for example, to distribute guidance to schools on best practices for addressing military dependent students' needs and to assist school districts located in areas experiencing influxes of military families.

DOD and Education officials have a history of collaborating on education issues for children of military families through the Impact Aid programs and formalized and broadened these efforts with a memorandum of understanding (MOU) they signed in June 2008. The MOU identifies five focus areas for collaboration:

1. Quality education. Share educational best practices at schools serving military dependent students, and implement policies to support those with special needs.
2. Student transition and deployment. Encourage school district and state policies that minimize the impact of military dependent students' frequent moves and parental deployments.
3. Data. Consider approaches for the collection, disaggregation, and analysis of education data on military dependent students.
4. Communication and outreach. Devise joint communication strategies to reach parents, educators, students, and military leaders about resources available from DOD and Education.
5. Resources. Support school districts affected by military growth through the DOD and Education Impact Aid programs, as well as other programs.

To address these five areas, DOD and Education outlined 13 specific objectives in the MOU, including coordinating the DOD and Education Impact Aid programs. (See appendix III for a complete list of the objectives.)

DOD and Education have carried out a number of collaborative activities within the five focus areas. For example, to address the area of resources, DOD and Education have collaborated to respond to the challenges from the 2005 military base closure and realignment actions that the BRAC Commission reported will result in 55 major closures and realignments by September 2011. These actions, once completed, would relocate large numbers of military families, which in turn would affect an increasing number of school districts. Officials from both agencies have made eight joint site visits, beginning in 2008, to high-growth military installations to better understand the specific education issues arising from mission changes and growth. The officials shared their findings with cognizant federal agencies, affected state and local governments, and school districts, and made recommendations for how the districts can best prepare for influxes of military dependent students. These recommendations included improving coordination between districts and federal agencies to better estimate military dependent student growth in a district.
DOD and Education are also collaborating on a study mandated in the National Defense Authorization Act for Fiscal Year 2010 that required DOD, in consultation with Education, to examine, among other things, the educational options available to military dependent children who attend schools in need of improvement as defined under ESEA. The study was also required to address the challenges military parents face in securing quality schooling for their children when the schools they attend are identified as needing improvement.

To address student transitions and parental deployment, DOD and Education issued guidance to school districts about best practices to minimize the impact on military dependent students' attendance records and academics when they are absent upon a parent's return from deployment. Further, DOD, in cooperation with Education, published a book for military families and military and school leaders called "Students at the Center," which provides information on resources and best practices for meeting the needs of military dependent children.

DOD and Education have also taken steps to improve interagency communication and develop compatible operating procedures and responsibilities—key elements of effective collaboration identified in our prior work. An MOU working group meets monthly and is in the process of writing protocols for communication between the agencies. In addition, a military liaison position was established at Education in 2008 to serve as the primary contact between the agencies for coordinating program development, management, and outreach related to improving the academic condition of military dependent children. A senior DoDEA official said this new position has been beneficial because it provides a single point of contact. Education officials told us the working group's efforts have increased communication with DOD and have led to a better understanding of the needs of children in families from all military branches. DOD officials also highlighted Education officials' increased interest in visiting military installations. DOD officials said that prior to the MOU, they had working relationships only with officials from Education's Impact Aid office; they now have relationships with officials in other offices in Education, such as its Office of Special Education and Rehabilitative Services and its Office of Elementary and Secondary Education. As a result, DOD officials have worked with representatives from those offices on several efforts. For example, according to a DOD official, Education officials provided technical support to DOD by reviewing school districts' applications for the 2009 DoDEA grants, and the working group has hosted guest speakers from both Education and DOD. In addition, an official from Education's Office of Safe and Drug-Free Schools spoke to the group about how its grant programs can assist military dependent students, and an official from DOD's Office of the Deputy Under Secretary of Defense for Military Community and Family Policy spoke to the group about progress on the Interstate Compact.

In May 2010, the White House announced a Presidential Study Directive on Military Family Policy, which requested that executive agencies develop a coordinated governmentwide approach to support and engage military families. According to senior Education officials, the directive has led Education to place an even greater priority on its collaborative efforts with DOD.
The directive has provided another framework under which DOD and Education have worked together to improve the quality of education for military dependent children. Education developed a work plan that details initiatives the agency will undertake to address the goals of the directive. For example, senior Education officials have visited military communities and schools to raise awareness of the challenges military dependent children face and the contributions their families make to the country. In addition, Education proposed that priority be given to its competitive grant proposals that could benefit military dependent students.

The working group monitors its progress through a strategic plan developed in 2010 that aligns the MOU's five focus areas for collaboration with initiatives the working group has accomplished or plans to carry out. Our prior work has found monitoring to be a key practice for effective interagency collaboration because it allows agencies to obtain feedback and improve effectiveness. DOD and Education officials told us the strategic plan helps them examine and prioritize their areas of collaboration, plan for future efforts, and reflect on the extent to which they are meeting the original intent of the MOU. For example, to address the focus area of student transition and deployment, working group members outlined plans in their strategic plan for a resource guide about best practices for school attendance. As a result of their work, they contributed to a pamphlet, published by the Military Child Education Coalition in 2010, called "Military-Connected Students and Public School Attendance Policies" that is meant to assist school administrators, base commanders, and parents. Specifically, the pamphlet includes examples of districts around the country upholding their attendance policies while ensuring military dependents receive a quality education when absent from school. In addition, for transition and deployment, working group members plan to look at installations with the highest deployment rates to explore options to mitigate the effects of daily attendance requirements for military dependent students affected by deployments.

Support for military families, including the education of military dependents, has received even greater attention with the May 2010 announcement of the Presidential Study Directive on Military Family Policy. In response, DOD and Education further increased their collaboration to provide a quality education and support to military dependent children through a variety of activities in addition to DOD Impact Aid. Programs such as DOD Impact Aid provide funding to assist school districts with a significant percentage of military dependents, but the outcomes and effectiveness of their activities are difficult to assess. This is due in part to the structure of the DOD Impact Aid program, which does not require any reporting on the use of the funds. Further, DOD, Education, states, and other parties concerned about the education of military dependents lack appropriate data to monitor the progress of military dependent students and the effectiveness of the schools and programs serving them. Currently, school districts and states are not required to collect academic achievement data for military dependent students, as they are for certain other groups of students, including economically disadvantaged students and students with disabilities.
Without these data, stakeholders lack critical information that could help them better understand the specific needs of these students and their educational outcomes over time.

To better understand the needs of military students and the effectiveness of strategies to assist them, we recommend the Secretary of Education, in collaboration with the Secretary of Defense, determine whether to require school districts to identify military dependent students as a distinct subgroup for reporting on their academic outcomes, such as test scores and high school graduation rates. This should include determining whether the Department of Education needs to obtain any additional legislative authority for this requirement, and seeking it from Congress, if necessary.

We provided a draft of the report to the Departments of Education and Defense for review and comment. Education agreed with our recommendation and stated that the agency proposed improving the collection of data on military dependent students in the upcoming reauthorization of ESEA. This proposal is discussed in the Administration's January 2011 report, Strengthening Our Military Families: Meeting America's Commitment. According to Education, under the Administration's proposal, states and school districts that receive funds under ESEA Title I, Part A would be required to report state-, district-, and school-level aggregate data on the academic achievement of military dependent students. DoDEA provided oral concurrence with our recommendation. Education and DOD both provided technical comments, which have been incorporated in the report as appropriate. Education's comments are reproduced in appendix IV.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, the Secretary of Defense, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Our review focused on (1) what is known about the use and effectiveness of Department of Defense (DOD) Impact Aid funds, (2) the challenges faced by school districts in serving military dependent students and strategies they have in place to address these challenges, and (3) how DOD and the Department of Education (Education) have coordinated their assistance to districts. We designed and implemented a Web-based survey to gather information on the use and effectiveness of DOD Impact Aid funds and the challenges faced by school districts in serving military dependent students. The survey also included questions regarding DOD Impact Aid for Children with Severe Disabilities and DOD Impact Aid for Base Realignment and Closure. We sent this survey to the 154 school districts that have received DOD Impact Aid Supplemental funds in any year from 2001 to 2009, the years covered in the mandate. We obtained the list of DOD Impact Aid recipients from Education and verified the recipients with a list provided by DOD Education Activity (DoDEA). Our survey was directed to the school district official identified as the point of contact for DOD Impact Aid by DoDEA officials.
Most of these school district officials were superintendents, assistant superintendents, directors of business or finance, or other business office employees. To assess the feasibility of conducting a survey for this report, we contacted several school districts to determine whether they would be able to respond to questions regarding their spending of DOD Impact Aid funds. All districts that we spoke with told us they would be able and willing to respond to such a survey. We obtained available data from both DOD and Education on the school districts that received DOD Impact Aid Supplemental funds in any year from 2001 through 2009, as well as a contact person for each district.

Drawing from the provisions in the mandate, information obtained during site visits to school districts, and preliminary interviews with DOD, Education, and two nonprofit organizations—the Military Impacted Schools Association and the Military Child Education Coalition—we developed survey questions. We also sought input on our final draft from the two nonprofit organizations, as well as internal GAO stakeholders and a survey specialist before conducting pretests. We pretested our survey draft with school district officials at four districts that received DOD Impact Aid funding in any year from 2001 to 2009 to help ensure that the questions were clear, the terms used were precise, the questions were unbiased, and the questionnaire could be completed in a reasonable amount of time. We modified the survey to incorporate the feedback from each pretest. The survey contained questions on (1) general school district information, (2) spending, tracking, and disbursement of DOD Impact Aid funds, (3) perceptions of effectiveness of DOD Impact Aid funding sources, and (4) challenges faced by districts with respect to military dependent students and strategies to address those challenges. The survey also contained questions on DOD's monitoring of funds, a specific provision in the mandate regarding the conversion of military housing to private housing (see app. II), and DOD and Education technical assistance or guidance to school districts.

We conducted the survey by using a Web-based self-administered questionnaire. In the questionnaire, we asked the school district officials to be the lead survey respondent and to consult with others in the district who may be more knowledgeable on questions related to challenges associated with educating military dependent students. We collected contact information for these school district officials from DoDEA and through searches of these districts' Web sites. We verified the contact information by sending notification e-mails and calling districts for the correct contact information in cases where the e-mail was undeliverable. We sent the survey activation e-mail to these officials on July 28, 2010, and then asked them to complete the survey within 3 weeks. To encourage them to respond, we sent three follow-up e-mails over a period of about 4 weeks and extended our survey deadline to September 13, 2010. Staff made phone calls over the next 2 weeks to encourage those who did not respond to complete our questionnaire. We closed our survey on September 24, 2010, and 118 school districts completed the survey for a response rate of 77 percent.
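To illustrate the arithmetic behind the response rate reported above, the short sketch below shows the calculation. It is illustrative only; the function and variable names are hypothetical and are not drawn from our actual survey analysis programs.

# Illustrative sketch: survey response rate, rounded to the nearest whole percent.
# The names here are hypothetical; this is not GAO analysis code.
def response_rate_percent(completed: int, surveyed: int) -> int:
    if surveyed <= 0:
        raise ValueError("surveyed must be a positive count")
    return round(100 * completed / surveyed)

# 118 of the 154 districts that received DOD Impact Aid Supplemental funds responded.
print(response_rate_percent(completed=118, surveyed=154))  # prints 77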
The practical difficulties of conducting any survey may also introduce errors commonly referred to as nonsampling errors. For example, difficulties in the way a particular question is interpreted, the sources of information that are available to respondents, or the way the data were analyzed can introduce unwanted variability into the survey results. We took steps in the development of this questionnaire, in the data collection, and in the data analysis to minimize such errors. Specifically, a survey specialist designed the questionnaire in collaboration with two staff members who were familiar with the subject matter. Then, as previously mentioned, the draft questionnaire was pretested with four school districts to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by officials from two military education advocacy organizations. Data analysis was conducted by a data analyst working directly with the staff who developed the survey. When the data were analyzed, a second independent data analyst checked all computer programs for accuracy. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaires. This eliminated the need to have the data keyed into databases, thus removing an additional source of error.

To identify the challenges school districts face in educating military dependent students and the strategies they have implemented, we conducted site visits to four districts in two states (Colorado and Virginia) and phone interviews with three districts in three states (California, Missouri, and Texas). We chose these districts based on recommendations from DOD, the Military Impacted Schools Association, and the Military Child Education Coalition. We strove to achieve diversity in geographic location, school district size, and the percentage of each district made up of military dependents from different branches of military service. (See table 3 below for more information on the districts we interviewed.) The findings from these five states and seven districts cannot be projected nationwide, but we believe they illustrate valuable perspectives on the challenges of serving military dependent students and on the assistance from DOD and other sources that helps address those challenges. During the visits we interviewed superintendents, assistant superintendents, budget office officials, guidance counselors, and, in some locations, military school liaisons, teachers, and students. In one school district, we also met with a group of parents. We also toured schools and obtained documents. Interviewees provided information on the unique challenges faced by military students and families and the strategies schools employ to respond to those challenges from their varying perspectives.

We conducted a review of the literature on military dependent student challenges and the strategies schools employ to respond to these challenges. We searched for literature using appropriate search terms such as "military dependent education" and "public school" in a variety of research databases. A social scientist assisted us in assessing the reliability and validity of these studies for our purposes. In the report, we present some examples from the literature to illustrate our findings. In addition, we reviewed prior GAO reports on elementary and secondary education, military restructuring, and practices that can help to enhance collaboration.
To review DOD and Education's efforts to implement DOD Impact Aid and to collaborate to serve military dependent students, we interviewed appropriate officials at DoDEA, and in offices at Education, which included the Office of Impact Aid; the Office of Innovation and Improvement; the Office of Planning, Evaluation, and Policy Development; and the Office of the Secretary, as well as representatives from the Military Impacted Schools Association and the Military Child Education Coalition, two organizations focused on military dependent education. We reviewed relevant federal laws and regulations. We also reviewed agency documentation, such as the memorandum of understanding (MOU) between DOD and Education, their strategic plan for implementing the MOU, and budget documentation for the DOD Impact Aid program and other DOD programs.

The National Defense Authorization Act for Fiscal Year 2010 mandated us to examine 17 separate provisions in various Defense Authorization Acts from fiscal years 2001 to 2009. We addressed all but three of the provisions in the main body of the report. Here we provide our findings on the remaining three provisions of the mandate.

Grant program for repair, renovations, and maintenance. The 2001 Defense Authorization Act authorized a grant program for repair, renovations, and maintenance of certain school facilities. Funding was to come from appropriations made for "Quality of Life Enhancements, Defense-Wide." In fiscal year 2001, $10.5 million was authorized and appropriated for that appropriations category. DOD allocated these funds, but could not provide more details about the use of these funds.

Continuing Impact Aid after deployment or death of a parent or guardian. This special rule was enacted to cover school years 2004–2005 and 2005–2006 so that Impact Aid would not be reduced in those districts where a local educational agency would normally lose funding as a result of the deployment or death of a parent or legal guardian on active duty. Children who resided on federal property and whose parents or legal guardians were deployed or died during that period were still counted for funding purposes. School district officials told us they have had no difficulties counting students whose parents or guardians had been deployed or who had died. An official from the Military Impacted Schools Association explained that this rule adequately addressed any problems experienced in the past.

Extending eligibility for Impact Aid where military housing is converted to private housing. This provision, enacted in fiscal year 2003, extends eligibility for a limited period of time to heavily impacted school districts that received a basic support payment in the prior fiscal year, but would subsequently be deemed ineligible as a result of the conversion of military to private housing. The provision extends eligibility during the period of conversion. School districts we interviewed and an official from the Military Impacted Schools Association did not mention any issues with regard to this provision.

Education and DOD's MOU identified 13 objectives to guide their collaborative efforts.

1. Promote and enhance policies that will improve military children's education and overall well-being.
2. Advance the quality of educational opportunities for all military children.
3. Provide research-based academic, social-emotional and behavioral supports to facilitate seamless transitions for military children.
4. Provide leadership and advocacy programs to help military students cope with issues surrounding deployments.
5. Support foreign language education, including programs for strategic languages.
6. Assist military parents to be informed advocates of quality education choices.
7. Explore legislative options to address transition issues for military children.
8. Extend opportunities for student learning through support of online or virtual and other research-based models.
9. Provide research-based teacher and administrator professional development programs.
10. Forge effective partnerships with schools and districts.
11. Coordinate the DOD and Education Impact Aid programs.
12. Communicate with military families and organizations to show appreciation for their contributions.
13. Increase awareness of resources and tools available from Education and DOD.

Individuals making key contributions to this report include Beth Sirois (Assistant Director), Kate Blumenreich (Analyst-in-Charge), Griffin Glatt-Dowd, and Karen Febey. Blake Ainsworth, Susan Aschoff, Cornelia Ashby, James Bennett, Michele Fejfar, Cathy Hurley, Julian Klazkin, Sheila McCoy, Kelly Rubin, and Kim Siegal also provided valuable assistance.
Since the early 1990s, Congress has supplemented the Department of Education's (Education) Impact Aid program by providing funds for the Department of Defense's (DOD) Impact Aid program to compensate school districts with a high number of military dependent students. The National Defense Authorization Act for Fiscal Year 2010 required GAO to review the use of these funds. GAO reviewed (1) what is known about the utilization and effectiveness of DOD Impact Aid funds, (2) the challenges faced by school districts in serving military dependent students, and (3) how DOD and Education have collaborated on their assistance. To address these issues, GAO conducted a Web-based survey of all 154 school districts that received DOD Impact Aid in any year from 2001 to 2009, with a response rate of 77 percent. GAO also interviewed officials from DOD and Education and seven school districts in five states that varied in size, location, and percentage of military dependent students. The findings from these visits cannot be projected nationwide, but illustrate valuable perspectives.

DOD Impact Aid has three distinct funding components, with more than three quarters of the funds provided through the DOD Impact Aid Supplemental program. Eighty-five percent of the 87 responding school districts that received funds for the 2009-2010 school year reported placing these funds into their general fund to use for overall maintenance and operations. Because there are no reporting requirements on districts' use of the funding, it is difficult to assess how the funds are used and to what extent military dependent students benefit. Further, there are no data available on these students that could be used to assess their academic achievement or educational outcomes, or determine where funding needs are greatest. Such reporting requirements exist for certain other groups of students, such as economically disadvantaged students and students with disabilities. Federal agency officials acknowledged this need for information, and Education has begun discussing how to address this need.

School districts GAO contacted reported that issues related to the mobility of military dependent students and serving students with special needs were among the greatest challenges they faced in serving these students. Mobility increased academic needs due to differences in state and district curricula, as well as behavioral and emotional issues in the classroom. To address challenges in serving military dependent students, school districts reported adopting a range of strategies, including additional counseling for students with a deployed parent and flexibility on academic requirements for newly transferred students.

Guided by a memorandum of understanding signed in 2008, DOD and Education have implemented practices that facilitate their collaboration to assist military dependent students, consistent with practices GAO has identified as enhancing collaboration. For example, beginning in 2008, the departments completed eight joint site visits to high-growth military installations, which helped them advise school districts on preparation for an influx of military dependent students. To monitor these collaborative efforts, DOD and Education have developed a strategic plan that tracks their progress.

GAO recommends that the Secretary of Education determine whether to require school districts to report data on the academic outcomes of military dependent students, and if so, to determine the need for any additional legislative authority.
Education agreed with GAO's recommendation, and DOD provided oral concurrence.
When we placed strategic human capital management on our high-risk list in January 2001 as a governmentwide high-risk challenge, we noted that after a decade of government downsizing and curtailed investments in human capital, it had become increasingly clear that federal human capital strategies were not appropriately constituted to adequately meet the current and emerging needs of the government and its citizens. We provided many examples of where human capital shortfalls were eroding the ability of some agencies—and threatening the ability of others—to effectively, efficiently, and economically perform their missions. In short, strategic human capital management was a pervasive challenge across the federal government. We noted that while legislation and other actions had been put in place since 1990 to address most major management areas, human capital was the critical missing link in reforming and modernizing the federal government's management practices. Our high-risk report pointed to actions that federal leaders and their agencies, the Office of Personnel Management (OPM), the Office of Management and Budget (OMB), and Congress needed to take to address high-risk human capital issues.

Since then, a real and growing momentum for change has become evident.

In August 2001, President Bush placed the strategic management of human capital at the top of the administration's management agenda.

In October 2001, OMB notified agencies that they would be assessed against standards for success for each part of the President's Management Agenda (PMA), including the strategic management of human capital. The first agency assessment was made public in February 2002 as part of the President's proposed fiscal year 2003 budget. Subsequent assessments were released in June and September 2002 and in January 2003, reporting on both the status and progress of agency efforts.

In December 2001, OPM released a human capital scorecard to assist agencies in responding to the human capital standards for success in the PMA.

In March 2002, we released A Model of Strategic Human Capital Management, designed to help agency leaders determine how well they integrate human capital considerations into daily decision making and planning for the program results they seek to achieve.

In April 2002, the Commercial Activities Panel, which I was honored to chair, sought to elevate attention to human capital considerations in making sourcing decisions.

In October 2002, OMB and OPM approved revised standards for success in the human capital area of the PMA, reflecting language that was developed in collaboration with GAO. To assist agencies in responding to the revised PMA standards, OPM released the Human Capital Assessment and Accountability Framework.

In the fall of 2002, OPM began realigning its organizational structure and appointed four new associate directors with proven human capital expertise to lead federal efforts as part of a larger OPM effort to be more customer-focused.

In November 2002, Congress passed the Homeland Security Act of 2002, which created the Department of Homeland Security (DHS) and provided the department with significant flexibilities to design a modern human capital management system. The effective development and implementation of these flexibilities will prove essential to the performance and accountability of DHS, as well as provide a potential model for Congress to consider for wider application governmentwide.
The Homeland Security Act of 2002 included additional significant provisions relating to governmentwide human capital management, such as direct hire authority, the ability to use categorical ranking in the hiring of applicants instead of the "rule of three," the creation of chief human capital officer (CHCO) positions and a CHCO Council, an expanded voluntary early retirement and "buy-out" authority, a requirement to discuss human capital approaches in Government Performance and Results Act plans and reports, and a provision allowing executives to receive their total performance bonus in the year in which it is awarded. Congress has further underscored the consequences of human capital weaknesses in federal agencies and pinpointed potential solutions through its oversight process and a range of hearings.

Despite the building momentum for comprehensive and systematic reforms, it remains clear that today's federal human capital strategies are not yet appropriately constituted to meet current and emerging challenges or to drive the needed transformation across the federal government. The basic problem is the long-standing lack of a consistent strategic approach to marshaling, managing, and maintaining the human capital needed to maximize government performance and assure its accountability. Specifically, as detailed in our January 2003 high-risk volume on human capital, agencies continue to face challenges in four overarching areas:

Leadership: Top leadership in agencies must provide the committed and inspired attention needed to address human capital and related organization transformation issues.

Strategic human capital planning: Agencies' human capital planning efforts need to be more fully and demonstrably integrated with mission and critical program goals.

Acquiring, developing, and retaining talent: Additional efforts are needed to improve recruiting, hiring, professional development, and retention strategies to ensure that agencies have the needed talent.

Results-oriented organizational cultures: Agencies continue to lack organizational cultures that promote high performance and accountability and empower and include employees in setting and accomplishing programmatic goals.

Committed and sustained leadership and persistent attention on behalf of all interested parties will continue to be essential to building on the progress that has been and is being made, if lasting reforms are to be successfully implemented.

First and foremost, individual federal agencies need to more consistently adopt a strategic approach to the use of their people. This requires persistent leadership and a long-term commitment; aligning human capital approaches with the accomplishment of agency goals; implementing recruiting, hiring, training, professional development, performance reward, and retention approaches that foster mission accomplishment; and instilling a results-oriented organizational culture. Agencies' CHCOs will need to play a particularly important role in this regard. The careful and strategic selection of these officials is therefore critical. The CHCO is not fundamentally an "HR" or personnel administration position, although knowledge in those areas is important. Rather, agency CHCOs should have the ability, experience, vision, attributes, and credibility needed to successfully integrate human capital considerations with program goals and to play a major leadership role in driving agency transformation efforts.
Agencies also must make effective use of the tools and flexibilities that Congress has provided. To assist agencies in this regard, and at the request of Chairman Voinovich, Ranking Minority Member Durbin, and other Members of Congress, we issued a report last December detailing the practices that agencies need to employ to effectively use human capital flexibilities. These practices are shown in figure 1.

The central management agencies—OPM and OMB—also have continuing vital roles to play. As the agency responsible for leading human capital management governmentwide, OPM plays a central role in helping agencies tackle the broad range of human capital challenges that are at the root of transforming what agencies do, how they do it, and with whom they partner. As detailed in our Performance and Accountability Series volume on OPM, our work and the work of others continue to show that agencies need and want greater leadership from OPM in helping them to address their human capital challenges, especially in identifying new human capital flexibilities, removing obstacles from the federal hiring process, and assisting agency workforce planning efforts. Opportunities exist for OPM to be more vigorous in responding to a number of critical program challenges, such as applicant examination, staffing, and compensation approaches. In addition, OPM shares responsibility with agencies for ensuring that human capital practices are carried out in accordance with merit system principles and other national goals. Effective and strategic oversight of agencies' systems is even more critical today because an increasing number of agencies are seeking and obtaining exemptions from traditional civil service rules at the same time that human capital staffs responsible for overseeing these activities have dwindled.

In response to these ongoing challenges, OPM has taken a number of important actions. First, OPM realigned its organizational structure and workforce to create a new, flexible structure that seeks to "de-stovepipe" the agency; enable it to be more responsive to its primary customers, federal departments and agencies; and focus on the agency's core mission. In November 2002, OPM's Director appointed four new associate directors with proven human capital expertise to lead the organization. OPM also has the key role in leading the administration's efforts to address strategic human capital management, a critical part of the PMA. In addition, OPM published two reports in 2001 to increase agencies' awareness of available human capital flexibilities, and released a report on federal compensation practices in April 2002. A major initiative begun in the spring of 2002 is designed to improve the hiring process. Furthermore, OPM is addressing its oversight challenge in part by encouraging agencies to develop and maintain internal accountability systems in line with its HRM Accountability Standards.

OPM recently released the results of its 2002 Federal Human Capital Survey. This survey is providing a wealth of important information on the views and attitudes of federal employees. The results demonstrate the importance of routinely surveying employees across the federal government through the Federal Human Capital Survey or a similar survey. Consideration should be given to exploring ways to assure that these surveys will be conducted on a periodic basis.
Finally, OPM is at the center of DHS's efforts to create a modern personnel system that serves the needs of the department and could serve as a potential model for others.

The designation of human capital as the first item on the PMA and the supporting standards for success have raised the profile of human capital issues on OMB's agenda. As OMB and the agencies learn to evaluate themselves against the standards and implement policies to make improvements, OMB will need to ensure that the standards are consistently and appropriately applied while assessing agencies' progress in managing their human capital. Perhaps most important, OMB support will be needed as agencies identify targeted investment opportunities to address human capital shortfalls.

Congress has had and will need to continue to have a central role in improving agencies' human capital approaches. Traditionally, Congress has been an institutional champion in improving management of executive agencies across the government. Support and pressure from Congress have been indispensable to instituting and sustaining management reforms at specific agencies. Its confirmation, oversight, appropriations, and legislative responsibilities provide Congress with continuing opportunities to ensure that agencies recognize their responsibilities to manage people for results. For example, as Chairman Voinovich has often stressed, the Senate has the opportunity during the confirmation process to articulate its commitment to sound federal management by exploring how prospective nominees plan to make a link between mission accomplishment and human capital policies. As part of the oversight and appropriations process, Congress can continue to examine whether agencies are managing their human capital to improve programmatic effectiveness and to encourage agencies to use the range of appropriate flexibilities available under current law. Congress will also play a critical role in determining the nature and scope of any additional human capital flexibilities that will be made available to agencies, while assuring that adequate safeguards are incorporated to prevent abuse. Congress also has the responsibility to ensure the reasonableness and adequacy of financial resources that are made available to agencies.

Congress is currently considering several pieces of legislation to help agencies address their current and emerging human capital challenges. I believe that the basic principles underlying these legislative proposals have merit and collectively they would make a positive contribution to addressing high-risk human capital issues and advancing the needed cultural transformation across the federal government. I also believe that certain additional safeguards and provisions should be considered by Congress. We look forward to working with the subcommittees as you consider these and related legislative initiatives. Today, I will provide observations on selected provisions of the various proposals.

The Senior Executive Service Reform Act of 2003

The proposed Senior Executive Service Reform Act of 2003 includes a number of important reforms. For example, the legislation would move to a single Senior Executive Service (SES) pay range, increase the pay cap, and link SES pay more closely to performance. I strongly believe that these are worthwhile reforms that must be considered together, as they are in this proposed legislation.
The legislation seeks to link pay and performance of senior executives by replacing the current system of six grades with a single pay band. Agencies would have flexibility to set basic pay for SES members at any amount within the range plus locality pay, to a total annual salary that may not exceed level II of the Executive Schedule. In addition, agencies could employ a broadbanding approach to SES pay should they so desire. This important change would provide agencies with needed flexibility to set SES pay in a way that reflects the reality of the great diversity in the work that members of the SES do rather than using a set of rigid SES pay grades. In fact, I have the authority to adopt such an approach in setting the pay for the SES in the GAO, and we plan to do so.

The legislation would raise the highest basic pay rate for an SES member from the current maximum of $134,000 (level IV of the Executive Schedule) to $142,500 (level III of the Executive Schedule). SES basic pay currently ranges from $116,500 to $134,000, before locality pay is included. The problems of SES pay compression are real and must be addressed, with over 60 percent of SES members being at the current cap.

The SES needs to lead the way in the federal government's effort to better link pay to performance. The legislation would require that agencies base their SES pay decisions on "individual performance, contribution to the agency's performance, or both." We have reported that there are significant opportunities to strengthen efforts to hold senior executives accountable for results. In particular, more progress is needed in explicitly linking senior executive expectations for performance to results-oriented organizational goals, fostering the necessary collaboration both within and across organizational boundaries to achieve results, and demonstrating a commitment to lead and facilitate change. These expectations for senior executives will be critical to keep agencies focused on transforming their cultures to be more results-oriented, less hierarchical, more integrated, and externally focused, and thereby better positioned to respond to emerging internal and external challenges, improve their performance, and assure their accountability.

Agencies should be required to have modern, effective, credible, and validated performance management systems in place before they are granted authority to better link pay to performance for broad-based employee groups. In this regard, Congress should consider providing specific statutory standards that agencies' performance management systems would be required to meet before OPM could approve any such pay-for-performance effort. Our own experience in implementing such reforms in GAO and the practices of other leading organizations that I will discuss shortly could serve as a starting point for that consideration.

Finally, the legislation's provision to allow agencies to credit nonfederal work experience for purposes of providing annual leave recognizes that the federal government must effectively recruit in a larger labor market. The increasing number of retirement-eligible federal employees is most concentrated in mid- and senior-level positions. To attract top talent, both at the entry and at midcareer levels, it is important to offer applicants an attractive compensation and benefits package that is not structured entirely on a model that assumes a 30-year career of federal service.
Simply stated, this provision recognizes the reality of increased mobility in the workforce and the need to modernize our annual leave provisions to attract and retain experienced people with critical skills.

The Federal Workforce Flexibility Act of 2003

The Federal Workforce Flexibility Act of 2003 would expand the authority to use and increase the amount of recruitment and retention bonuses. For example, the legislation would allow the payment of a recruitment bonus of up to 100 percent of an employee's annual salary for critical, hard-to-fill positions, subject to approval by the agency. The legislation also expands the use of recruitment bonuses to employees currently employed in another federal agency and retention bonuses to employees who might leave to go to another federal agency. Previously, recruitment bonuses could only be paid to employees coming from outside the federal government and retention bonuses could only be paid to employees likely to leave federal employment altogether. We support providing agencies with these types of additional tools and flexibilities to attract and retain needed staff as long as such payments are targeted, based on a business need, and implemented with adequate safeguards. In that regard, Congress should consider capping the number or percentage of employees in an agency who would be eligible for such payments.

As you know, the federal government faces a looming wave of employees who will be eligible for retirement. Agencies need succession planning programs to ensure that knowledge is transferred from one generation of employees to another. An approach that should be explored would be to allow "phased retirements." There are a number of ways that a phased retirement program could work; the legislation seeks to provide one option for employees who would like to work part time as they end their federal careers by prorating retirement annuities for the period of service that was performed on a part-time basis, thus removing a current disincentive to such part-time work.

The Federal Workforce Flexibility Act of 2003 would also expand the authority to conduct personnel demonstration projects. Such projects, authorized by OPM under the Civil Service Reform Act of 1978, provide a means for testing and introducing improvements in governmentwide human resources management systems. To conduct a demonstration project, a federal agency obtains authority from OPM to waive existing federal human resources management laws and regulations in Title 5 and to propose, develop, test, and evaluate interventions for its own human resources management system that can help shape the future of federal human resources management. As a general rule, current law limits the size of a demonstration project to 5,000 employees and the life of a project to 5 years. The legislation would eliminate the cap on the number of employees who could participate in a demonstration project and allow the projects to have up to a 10-year life span. This more flexible approach to demonstration projects is consistent with the approach Congress took in 1996 in authorizing the Department of Defense civilian acquisition workforce demonstration project to expand the number of personnel eligible to participate from the statutory cap of 5,000 to a maximum of 95,000 and extend the project's length from a 5-year time limit to 13 years. Demonstration projects' testing, evaluation, and reporting requirements have provided invaluable lessons learned to other federal organizations.
Much of the federal government's knowledge and real-world experience with performance-based pay reform has been obtained through demonstration projects. In fact, of the 17 demonstration projects that have been implemented over the past 25 years, 12 have tested some form of linking compensation to performance. In addition, a demonstration project conducted at the Department of Agriculture provided an important test of using categorical ranking as part of the applicant selection process and was therefore useful to the Congress in deciding to expand such authorities governmentwide as part of the Homeland Security Act of 2002.

The Federal Workforce Flexibility Act of 2003's reforms to enhance agencies' training and career development programs are also positive steps that should help improve human capital management. The legislation calls for agencies to evaluate their training programs and plans to ensure that they are linked to strategic and performance goals and contribute to achieving the agency's mission. Such evaluations of training and development efforts are important in demonstrating how these efforts help develop employees and improve the agency's performance. As part of a balanced approach, training and development evaluations should consider organizational results and feedback from customers and employees. The strategic evaluation requirement in this legislation should help move agencies away from an orientation on activities or processes (such as the number of participants, courses offered, and hours of training provided) and toward using information on how training and development efforts (1) contribute to improved performance, (2) strengthen capacity to meet new and emerging challenges, and (3) reduce the cost of poor performance.

The legislation focuses agencies on several specific areas of importance, including developing succession programs and informing managers about effective strategies to address performance problems, mentor employees, and improve performance and productivity. We have noted that linking an executive development program and comprehensive succession planning to agency goals and objectives can help foster a committed leadership team. Further, calling for agencies to identify and share effective human capital strategies can help improve individual and organizational performance and further efforts to transform the cultures of government agencies. At Chairman Voinovich's request, this fall we will report on selected agencies' efforts to design effective training and development programs.

Generating Opportunity by Forgiving Educational Debt Service Act of 2003

Congress previously passed legislation that allows agencies to set up programs to repay the student loans of federal employees in order to attract or keep highly qualified individuals. Several agencies, including GAO, have begun such programs and have found them to be valuable in attracting and retaining high-quality talent. These payments are currently included in gross income for federal tax purposes. However, the Generating Opportunity by Forgiving Educational Debt Service Act of 2003 (GOFEDS) would make these payments nontaxable. GOFEDS would therefore make payments by the federal government generally comparable to loan forgiveness programs in use by some educational institutions and nonprofit organizations. We believe that this provision has great merit.
It would help to further leverage existing student loan repayment program dollars and would help agencies in their efforts to attract and retain top talent. Obviously, Congress will need to balance the federal human capital benefits of this provision as a tax expenditure with overall federal tax policy. Moreover, Congress should consider how GOFEDS could be implemented in such a way that the tax forgiveness provisions do not obscure the true costs of agency operations.

The Presidential Appointments Improvement Act of 2003

The Presidential Appointments Improvement Act of 2003 would, among other things, require each executive agency to identify the number of presidentially appointed, Senate-confirmed positions and the layers of those positions. Related to this provision, last September I convened a roundtable to discuss the Chief Operating Officer (COO) concept and how it might apply within selected federal departments and agencies as one strategy to address certain systemic federal governance and management challenges. There was considerable discussion of whether the senior management official in an agency should be presidentially appointed, requiring Senate confirmation, while Senate confirmation would not be required for those officials who lead specific management functions (e.g., financial management, information technology, human capital) and who report to that senior management official. While there was interest in considering such an arrangement, it was also acknowledged that it would likely require amending existing legislation, for example the Chief Financial Officers Act, and, therefore, would need careful analysis to ensure that any legislative changes result in augmented attention to management issues and do not inadvertently lead to a reduction in the authority of key management officials and/or the prominence afforded a particular management function.

An additional suggestion made at the roundtable that Congress may wish to consider would be to allow senior management officials in each agency to assume full authorities and responsibilities, for up to a specified period of time, once they were formally nominated but before their confirmation. However, it was widely recognized that such an approach would be viable only if the senior management position was restricted to the professional and nonpartisan "good government" responsibilities that are fundamental to effectively executing any administration's program agenda and did not entail program policy-setting authority. Furthermore, should Congress decide to adopt the COO concept noted above and not make certain management officials subject to the confirmation process (e.g., the Chief Financial Officer and the Chief Information Officer), the need for this flexibility would be greatly reduced.

More generally, the roundtable's overall purpose was to discuss the COO concept and how it might apply within selected federal departments and agencies. The roundtable discussion neither sought nor achieved a consensus on the COO concept. However, it does appear that there was general agreement on a number of important overall themes that can serve as a basis for subsequent analysis, discussion, and consideration. These generally agreed-upon themes provide a course for action.

Elevate attention to management issues and transformational change. The nature and scope of the changes needed in many agencies require the sustained and inspired commitment of the top political and career leadership.
There is no substitute for top leadership involvement, including that of the President, through, for example, the establishment of a governmentwide management agenda. Top leadership attention is essential to overcome organizations' natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business.

Integrate various key management and transformation efforts. By their very nature, the problems and challenges facing agencies are crosscutting and thus require coordinated and integrated solutions. However, the federal government too often places management responsibilities (for example, information technology, human capital, or financial management) into various "stovepipes" and fails to implement transformational change management initiatives in a comprehensive, ongoing, and integrated manner. While officials with management responsibilities often have successfully worked together, there needs to be a single point within agencies with the perspective and responsibility—as well as authority—to ensure the successful implementation of functional management and, if appropriate, transformational change efforts. At the same time, it is not practical to expect that the deputy secretaries, given the competing demands on their time in helping the secretaries execute the President's policy and program agendas, will be able to consistently undertake this vital integrating responsibility. Moreover, while many deputy secretaries may be nominated based in part on their managerial experience, this has not always been the case and, not surprisingly, the management skills, expertise, and interests of the deputy secretaries have always varied and will continue to vary.

Institutionalize accountability for addressing management issues and leading transformational change. The management weaknesses in some agencies are deeply entrenched and long-standing and will take years of sustained attention and continuity to resolve. In addition, making fundamental changes in agencies' cultures will require a long-term effort. The experiences of successful major change management initiatives in large private and public sector organizations suggest that it can often take at least 5 to 7 years until such initiatives are fully implemented and the related cultures are transformed in a sustainable manner. In the federal government, the frequent turnover of the political leadership has often made it difficult to obtain the sustained and inspired attention required to make needed changes.

Looking forward, Congress should consider making comprehensive legislative reforms to existing civil service laws, taking into account the extent to which traditional approaches make sense in the current and future operating environments. In that regard, there is a growing understanding that we need to fundamentally rethink our approach to federal pay and develop an approach that places a greater emphasis on a person's knowledge, skills, position, and performance rather than the passage of time, the rate of inflation, and geographic location. The OPM Director's White Paper on modernizing federal pay, issued last April, amply demonstrated that the current federal pay system was designed for the heavily clerical and low-graded workforce of the 1950s rather than today's knowledge-based government.
Similarly, the National Commission on the Public Service, chaired by Paul Volcker, observed that agencies need greater freedom to connect pay both to the market and to performance. In short, as the nature of the federal workforce has changed, so too must our pay system if we are to effectively compete for top talent and create incentives for both individual and institutional success. Under the current federal pay system, the overwhelming majority of each year's increase in federal employee pay is largely unrelated to an employee's knowledge, skills, position, or performance. In fact, over 80 percent of the cost associated with the annual increases in federal salaries is due to longevity and the annual pay increase. One approach that has been tested and that I believe deserves wider consideration is to reserve the annual pay adjustment for only those employees who receive an acceptable performance rating. This would send a clear message to the overwhelming majority of federal employees that their contributions are valued, and those few who are not contributing will not be rewarded for their lack of effort. More generally, current federal pay gaps vary by the nature of the position, and yet the current method for addressing the pay gap assumes that the gap is the same throughout government. We must move beyond this outdated, "one-size-fits-all" approach to paying federal employees and seriously explore more market- and performance-based approaches to federal pay. As part of this exploration, we need to continue to experiment with providing agencies with the flexibility to pilot alternative approaches to setting pay and linking pay to performance.

The greater use of "broadbanding" is one of the options that should be considered as part of a broader discussion of pay reform. In the short term, Congress should explore the benefits of (1) providing OPM with additional flexibility that would enable it to grant governmentwide authority for all agencies (i.e., class exemptions) to use broadbanding for certain critical occupations and/or (2) allowing agencies to apply to OPM (i.e., case exemptions) for broadbanding authority for their specific critical occupations. However, agencies should be required to demonstrate to OPM's satisfaction that they have modern, effective, credible, and validated performance management systems before being able to adopt broader pay for performance systems for non-SES personnel. In this regard, Congress should consider providing specific statutory standards that agencies must meet before OPM would be able to grant an exemption from existing Title 5 requirements. As with all pay for performance efforts, adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, would need to be in place to ensure fairness, prevent politicization, and prevent abuse. Such safeguards would include ensuring that an agency's career leadership and managers have significant roles in performance-related pay decisions and that employees have central roles in the design and implementation of the system to build their sense of ownership of the system. In our work looking at leading performance management efforts here and abroad, we have found that the involvement of employees is critical to the success of such initiatives. Leading organizations consulted a wide range of stakeholders early in the process, obtained feedback directly from employees, and engaged employees' unions or associations.
The bottom line is that in order to receive any additional performance-based pay flexibility for broad-based employee groups, agencies should have to demonstrate that they have the modern, effective, credible, and validated performance management systems in place that are capable of supporting such decisions. Unfortunately, most federal agencies are a long way from meeting this requirement. As I noted earlier, the SES needs to lead the way in the federal government's effort to better link pay to performance. Given the state of agencies' performance management systems, Congress should consider starting federal results-oriented pay reform with the SES. Agencies should be granted the authority to implement additional pay for performance programs only after they have demonstrated that they have appropriate performance management systems and adequate safeguards in place. Building such systems and safeguards will likely require making targeted investments in agencies' human capital programs, as GAO's own experience has shown. In that regard, Congress and the Administration should consider how incentives can be provided to encourage agencies to modernize their performance management systems. This could include a potential governmentwide fund for such purposes, which could be allocated based on specific business case proposals by individual agencies. This approach could also help to facilitate implementation of the high-performing organization (HPO) concept recommended by the Commercial Activities Panel that I chaired.

A report we prepared at the request of Chairman Voinovich and Chairwoman Davis that was released last month shows specific practices that leading public sector organizations both here in the United States and abroad have used in their performance management systems to link individual performance and organizational success. These practices include the following:

1. Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals.

2. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results.

3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities.

4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities.

5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results.

6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards.

7. Make meaningful distinctions in performance. Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers.
8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees' and stakeholders' understanding and ownership of the system and belief in its fairness.

9. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals.

We in GAO believe it is our responsibility to lead by example. We seek to be in the vanguard of the federal government's overall transformation efforts, including in the critically important human capital area. We are clearly in the lead at the present time, and we are committed to staying in the lead. We fully recognize that our people are our most valuable asset, and it is only through their combined efforts that we can effectively serve our clients and our country. By managing our workforce strategically and focusing on achieving positive and measurable results, we are helping to maximize our own performance and ensure our own accountability. By doing so, we also hope to demonstrate to other federal agencies that they can make similar improvements in the way they manage their people.

We have identified and made use of a variety of tools and flexibilities, some of which were made available to us through the GAO Personnel Act of 1980 and our calendar year 2000 human capital legislation, but most of which are available to all federal agencies. The most prominent change in human capital management that we implemented as a result of the GAO Personnel Act of 1980 was a broadbanded pay-for-performance system. The primary goal of this system is to base employee compensation primarily on the knowledge, skills, and performance of individual employees. It also provides managers flexibility to assign employees in a manner that is more suitable to multi-tasking and the full use of staff. Under our current broadbanded system, analyst and analyst-related staff in Grades 7 through 15 were placed in three bands. While our general experience has been positive, we expect to modify our banded system in the future based on our experience to date.

In January 2002, we implemented a new competency-based performance management system that is intended to link employee performance to our strategic plan and agency core values. It includes 12 competencies that our employees overwhelmingly validated as the keys to meaningful performance at GAO. (See fig. 2.) Modernizing performance management systems in the federal government is essential to the overall government transformation effort. Importantly, doing so can be accomplished without any additional legislation.

Our October 2000 legislation gave us additional tools to realign our workforce in light of mission needs and overall budgetary constraints; correct skills imbalances; and reduce high-grade, managerial, or supervisory positions without reducing the overall number of GAO employees. This legislation allowed us to create a technical and scientific career track at a compensation level comparable to that of the SES. It also allowed us to give greater consideration to performance and employee skills and knowledge in any reduction-in-force actions. Since the legislation was enacted, we have established agency regulations and offered voluntary early retirement opportunities.
Once employees registered their interest in participating in the program, we considered a number of factors, including employee knowledge, skills, performance, and competencies; the organizational unit or subunit in which an employee worked; an employee's occupational series, grade, or band level, as appropriate; and the geographic location of the employee. As authorized by the 2000 legislation, employee performance was just one of many factors we considered when deciding which employees would be allowed to receive the incentives. However, let me assure you, we did not use performance to target certain individuals. Early retirement was granted to 52 employees in fiscal year 2002 and 24 employees in fiscal year 2003. Our annual performance and accountability reports have provided additional information on our use of this authority. As required by the 2000 legislation, we will shortly be providing Congress with a more comprehensive assessment of our use of the authorities granted to us under the act.

We are also using many recruiting flexibilities that are available to most agencies, including an extensive campaign to increase our competitiveness on college campuses and extending offers of employment during the fall semester to prospective employees who will come on board the following spring and summer. We are also using our internship program in a strategic fashion, and we often offer permanent positions to GAO interns with at least 10 weeks of highly successful work experience. Moreover, we are building and maintaining a strong presence of both senior executives and recent graduates on targeted college campuses. We have also taken steps to streamline and expedite our hiring process.

Even after we hire good people, we need to take steps to retain them. We have taken a number of steps to empower and invest in our employees. For example, we have active employee feedback and suggestion programs. In addition, we implemented a student loan repayment assistance program for employees who have indicated interest and are willing to make a 3-year commitment to staying with the agency.

Overall, we have implemented a number of human capital initiatives, including the following, some of which are relatively recent and some of which are long-standing:

Prepared a human capital profile and needs assessment to understand employee demographics and distribution.

Conducted agencywide, confidential, and web-based employee surveys in 1999 and 2002 to understand the status and progress of the agency and the areas in which we need to improve.

Completed a knowledge and skills inventory for all employees.

Established a democratically elected Employee Advisory Council to facilitate open communication and direct input from line employees to the Comptroller General and other GAO senior leaders on matters of mutual interest and concern.

Conducted an employee preference survey so that employees could be given the opportunity to work in the areas that interest and energize them in light of our institutional needs.

Implemented an Executive Candidate Development Program to prepare candidates for assignments in the SES.

Developed and implemented a strategy to place more emphasis on diversity in campus recruiting.

Initiated a Professional Development Program for newly hired GAO analysts to help them transition and progress.

Began developing a core training curriculum to directly link and support our validated core competencies.

Provided an on-site child care center called "Tiny Findings" and a wellness and fitness center.
Implemented additional employee-friendly benefits such as business casual dress, flextime, and public transportation subsidies.

Implemented a program to reimburse GAO employees for the cost incurred in pursuit of relevant professional certifications.

Used recruitment bonuses, retention allowances, and student loan repayment assistance to attract and retain employees with specialized skills.

Implemented a new state-of-the-art performance appraisal system that is linked to our strategic plan and based on key competencies, which have been validated by our employees. This new system has been implemented for analysts. This system is being adapted for our attorneys, and we have begun modifying the system for our administrative professional and support staff.

Many of the above initiatives required one-time investments to make them a reality. We worked with the Congress to present a business case for funding a number of these initiatives. Fortunately, the Congress has supported these and other GAO transformation efforts. The result is a stronger, better positioned, more effective, results-oriented, and respected GAO. As we engage in these changes, we also know that we are not perfect and we never will be. This is a work-in-progress for us as it is for others. In fact, we are constantly evaluating our internal efforts, seeking to learn from others, and making refinements as we go along.

In that regard and as you know, we expect in the coming weeks to be formally approaching Congress with recommendations to provide us with additional statutory authorities to enable us to better manage our people. The legislation we plan to recommend would, among other things, facilitate GAO's continuing efforts to recruit and retain top talent, develop a more performance-based compensation system, help realign our workforce, and facilitate our succession planning and knowledge transfer efforts. We believe that these authorities will strengthen our efforts to serve Congress and provide benefits to the American people. As has been the case in the past, we also expect that our use of these authorities will provide valuable lessons to Congress and agencies on how human capital flexibilities can be used in a context that helps an organization achieve its missions while still ensuring that adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, are in place to prevent abuse.

For further information regarding this testimony, please contact J. Christopher Mihm, Director, Strategic Issues, on (202) 512-6806 or at [email protected]. Individuals making key contributions to this testimony included William Doherty, Bruce Goddard, Judith Kordahl, Janice Lichty, Michael O'Donnell, Susan Ragland, Lisa Shames, Edward H. Stephenson, Jr., and Michael Volpe.
Federal employees represent the government's knowledge base, drive its capacity to perform, and define its character, and as such, are its greatest asset. The early years of the 21st century are proving to be a period of profound transition for our world, our country, and our government. In response, the federal government needs to engage in a comprehensive review, reassessment, reprioritization, and as appropriate, reengineering of what the government does, how it does business, and in some cases, who does the government's business. Leading public organizations here and abroad have found that strategic human capital management must be the centerpiece of any serious change management initiative and effort to transform the cultures of government agencies. In response to a Congressional request, GAO discussed the status of the federal government's efforts to address high-risk human capital weaknesses, possible short- and longer-term legislative solutions to those weaknesses, and other human capital actions that need to be taken to ensure that federal agencies are successfully transformed to meet current and emerging challenges. Since GAO designated strategic human capital management as a governmentwide high-risk area in January 2001, Congress, the administration, and agencies have taken a number of steps to address the federal government's human capital shortfalls. In fact, more progress in addressing the government's long-standing human capital challenges was made in the last 2 years than in the last 20, and GAO is confident that more progress will be made in the next 2 years than the last 2 years. Despite the building momentum for comprehensive and systematic reforms, it remains clear that today's federal human capital strategies are not yet appropriately constituted to meet current and emerging challenges or to drive the needed transformation across the federal government. The basic problem is the long-standing lack of a consistent strategic approach to marshaling, managing, and maintaining the human capital needed to maximize government performance and assure its accountability. Committed and sustained leadership and persistent attention on behalf of all interested parties will continue to be essential to building on the progress that has been and is being made. Congress has had and will need to continue to have a central role in improving agencies' human capital approaches. The basic principles underlying the legislative proposals Congress is considering have merit. Collectively, these proposals would make a positive contribution to addressing high-risk human capital issues and advancing the needed cultural transformation across the federal government. At the same time, additional safeguards should be considered by Congress in order to prevent potential abuse. Moreover, certain additional proposals should be considered as part of this legislative package. Looking forward, the time has come to seriously explore more market- and performance-based approaches to federal pay. As part of this exploration, we need to continue to experiment with providing agencies with the flexibility to pilot alternative approaches to setting pay and linking pay to performance. A more performance-based approach to Senior Executive Service pay would be a good place to start. 
The bottom line, however, is that in order to receive any additional performance-based pay flexibility for broad-based employee groups, agencies should have to demonstrate that they have modern, effective, credible, and validated performance management systems, with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms in place, that are capable of supporting such decisions. Unfortunately, most federal agencies are a long way from meeting this requirement. GAO, on the other hand, has taken numerous steps to meet this requirement and is well positioned to experiment with additional pay for performance flexibility.
Employer-sponsored coverage is the predominant source of health insurance in the United States. In 2001, 67 percent of all nonelderly adults (over 118 million) and 64 percent of all children (46 million) obtained health insurance through an employer (see fig. 1). Nearly all large firms and almost half of smaller firms offer health insurance coverage for their employees. Federal tax laws provide incentives for employers to pay some or all of the premiums because their contributions are tax deductible as a business expense; the employer-paid portion of the premiums is also not considered taxable income for employees. Although the share of the premiums paid by employers varies with the size of the firm and the type of health plan, firms pay an average of more than 80 percent of the premiums for single coverage and more than 75 percent for family coverage. Also, for many individuals, the premiums for employment-based insurance are lower than those in the private market for comparable individual coverage.

Low-income individuals without access to employer-based insurance coverage may qualify for Medicaid or SCHIP. These public insurance financing programs covered over 40 million low-income people at a cost of about $232 billion in federal and state expenditures in 2001. Established in 1965, Medicaid is a joint federal-state entitlement program that finances health care coverage for certain low-income individuals. Medicaid eligibility is based in part on family income and assets. States set their own eligibility criteria within broad federal guidelines. For example, states vary in the kind and amount of income they exclude from consideration when determining eligibility. Similarly, while some states set a ceiling on the value of assets—such as cars, savings accounts, or retirement income—that individuals may have available to them in order to be deemed eligible for Medicaid, other states have no asset test for eligibility. To the extent that asset tests are present in a state's Medicaid program, individuals would need to "spend down" or dispose of their assets to become eligible for Medicaid.

More than half of the individuals enrolled in Medicaid are children. Federal law requires states to provide Medicaid coverage to children age 5 and under if their family income is at or below 133 percent of the federal poverty level and to children age 6 to 19 in families with incomes at or below the federal poverty level. Most states have received federal approval to set income eligibility thresholds that expand their Medicaid programs beyond the minimum federal statutory levels for children. Medicaid eligibility for nondisabled adults is more limited. Federal law requires states to provide Medicaid coverage to pregnant women up to 133 percent of the federal poverty level, and mandatory eligibility for parents is linked to the Medicaid family coverage category established in the 1996 federal welfare reform law. At a minimum, federal law requires states to offer Medicaid coverage to parents in families that meet the income and other eligibility rules that the state had in place on July 16, 1996, for determining eligibility for welfare assistance. Nationwide, considerable variation in Medicaid eligibility thresholds for parents exists. For example, Alabama covers parents whose family income is up to 13 percent of the federal poverty level. At the other end of the spectrum, Minnesota covers parents with family incomes up to 275 percent of the federal poverty level.
The Medicaid statute does not generally provide for mandatory or optional coverage of nondisabled childless adults. However, some states have received federal approval to expand their Medicaid programs to include coverage for some of them. In 1997, the Congress created SCHIP to provide health coverage to children living in families whose incomes exceed the eligibility limits for Medicaid. While SCHIP is generally targeted to children in families with incomes at or below 200 percent of the federal poverty level, each state may set its own income eligibility limits, within certain guidelines. As of January 2002, states’ upper income eligibility threshold for SCHIP ranged from 133 to 350 percent of the federal poverty level. Unlike Medicaid, which entitles all those eligible to coverage, SCHIP has a statutory funding limit of $40 billion over 10 years (fiscal years 1998 through 2007). Under SCHIP, states can cover the entire family—including parents or custodians of eligible children—if it is cost-effective to do so, meaning that the expense of covering both adults and children in a family does not exceed the cost of covering just the children. Similar to Medicaid, states can obtain federal approval of SCHIP expansions through a section 1115 waiver. While more than 85 percent of Americans obtain health insurance coverage from the private insurance market or public programs, 40.9 million nonelderly Americans (16.5 percent) had no health insurance in 2001. Approximately 75 percent of the uninsured nonelderly adults had jobs. Individuals working part time, for small firms, or in certain industries, such as agriculture or construction, were more likely to be uninsured (see table 1). Young adults, minorities, and low-income persons were also more likely to be uninsured. The percentage of uninsured is generally higher in the South and West and lower in the Midwest and Northeast (see fig. 2). Texas had the highest uninsured rate of nonelderly Americans (25.9 percent) of any state in 2001, while Iowa had the lowest (8.7 percent). From March 2001 to March 2002, the national unemployment rate increased 1.4 percentage points, from 4.3 percent to 5.7 percent, with nine states experiencing above-average increases. The largest percentage point increases occurred in Colorado (2.6), Oregon (2.5), and Utah (2.0) (see table 2). Across the six states we reviewed—Colorado, New Jersey, North Carolina, Ohio, Oregon and Utah—the greatest unemployment increases were generally seen in manufacturing, construction, and transportation and public utilities (see table 3). Unemployed individuals may be eligible for financial assistance through the Unemployment Insurance Program, a federal-state partnership designed to partially replace the lost earnings of individuals who become unemployed through no fault of their own. While program requirements vary by state, individuals eligible for unemployment insurance generally (1) have worked for a specified period in a job covered by the program, (2) left the job involuntarily, and (3) are available, able to work, and actively seeking employment. Most states provide a maximum of 26 weeks of benefits, although benefits in some states have been extended for an additional 13 weeks in times of high unemployment. Benefits are generally based on a percentage of an individual’s earnings over the prior year, up to a maximum amount. The national average weekly unemployment benefit was $254 in the first quarter of 2002, with benefits lasting an average of nearly 15 weeks. 
In the six states we reviewed, the weekly unemployment benefit ranged from $253.80 in Ohio to $327.15 in New Jersey (see table 4). Although many aspects of health insurance, including premiums, are regulated at the state level, two federal laws— the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) and the Health Insurance Portability and Accountability Act of 1996 (HIPAA)—established requirements designed to help certain individuals maintain health coverage after loss of employment. COBRA provided that firms with 20 or more employees offer former employees and their dependents the opportunity to continue their group coverage for at least 18 months. To qualify for COBRA benefits, former employees must have been covered by the employer’s plan the day before they stopped working at the firm. Former employees are eligible only for the health plan coverage that they received while employed. COBRA coverage is not available if the former employer discontinues health benefits to all employees, as in a company closure. While employers must allow COBRA-eligible former employees to continue receiving coverage under the employer’s group health plan, the employer does not have to pay for it. The former employee can be required to pay the full cost of the group health premium plus 2 percent, which is designed to cover the employer’s administrative cost of keeping the former employee in the plan. Based on data from a 2002 survey of employers, the average cost of COBRA coverage is approximately $260 a month for an individual and $676 a month for a family. Based on a survey of a national sample of 1,001 nonelderly adults, a recent study estimated that because of the cost of COBRA continuation coverage, “only 23 percent of employed, insured adults would be very likely to participate in the COBRA program if they lost their jobs.” Unlike COBRA, which provided the opportunity for individuals losing their jobs to continue their private group health insurance, HIPAA provisions guarantee certain individuals losing group coverage the right to purchase coverage in the individual market. HIPAA provides guaranteed access to health coverage for individuals who, among other criteria, had at least 18 months of coverage without a break of more than 63 days and with the most recent coverage being under a group health plan. HIPAA stipulates that states must either require health insurers to make certain of their policies available to qualifying individuals or use an “alternative mechanism” to offer them coverage. An example of an alternative mechanism is a state-sponsored high-risk pool, which offers comprehensive insurance coverage to individuals with preexisting health conditions who are otherwise unable to obtain coverage in the individual market or who may be able to obtain coverage only at a prohibitive cost. (Appendix I describes how the six states that we reviewed guarantee access to coverage under HIPAA.) As with COBRA, individuals bear the full cost of individual coverage received under HIPAA. Since HIPAA provides for coverage in the individual insurance market, in which premiums are generally based on the characteristics of the individual applicant, this coverage is likely to be more costly for many applicants for a similar level of coverage than premiums for groups, where risk is spread over all members of the group. The differences will be smaller in some states that have imposed restrictions on how much insurers can vary premiums based on an individual’s characteristics. 
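To put these premium and benefit figures side by side, the short sketch below works through the arithmetic. It uses the dollar amounts cited above; the weeks-per-month conversion (52/12, about 4.33) is an assumption added here purely for this back-of-the-envelope comparison.

# Rough illustration only: how large a share of an average monthly
# unemployment benefit the reported COBRA-level premiums would absorb.
# Dollar figures come from the text; the weeks-per-month factor is assumed.

WEEKS_PER_MONTH = 52 / 12                      # ~4.33, assumed conversion

avg_weekly_benefit = 254.00                    # national average, first quarter of 2002
avg_monthly_benefit = avg_weekly_benefit * WEEKS_PER_MONTH   # roughly $1,100

monthly_premiums = {"individual": 260.00, "family": 676.00}  # 2002 employer survey

for coverage, premium in monthly_premiums.items():
    share = premium / avg_monthly_benefit
    print(f"{coverage}: ${premium:,.0f}/month is about {share:.0%} of the average monthly benefit")

# Prints roughly 24 percent for individual coverage and 61 percent for family
# coverage, consistent with the range cited below for state continuation coverage.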
The six states we reviewed had instituted various protections that might assist individuals who have lost their jobs in maintaining or obtaining health insurance. Unemployed individuals, however, generally bore the full cost of the premium. States did not have data on the number of individuals who lost their health insurance during the economic decline and thus who could benefit from these protections, but they did have data on the number of individuals using some of the protections.

The six states we reviewed had in place a variety of protections, which were established prior to the economic downturn. Unemployed individuals, however, were generally responsible for bearing the full costs of purchasing health insurance. Key protections to assist unemployed individuals in maintaining health insurance coverage included the following:

State-mandated continuation coverage, through which states require small businesses to extend their group health coverage to former employees and their families if the former employees pay for it;

Guaranteed conversion, through which states require insurers to give eligible individuals the ability to convert their group coverage to an individual health insurance policy;

Guaranteed issue, through which states require insurers to offer coverage to individuals who do not have access to group coverage or public insurance; and

High-risk pools, in which states create associations that offer comprehensive health insurance benefits to individuals with acute or chronic health conditions.

Table 5 indicates the extent to which the six states we reviewed had adopted such protections. Of the six states we reviewed, only Oregon assisted lower-income unemployed individuals in paying the cost of coverage premiums. Previously funded solely with state resources, the program was unable to expand enrollment for nearly 3 years and had a significant waiting list due to budget constraints. However, in October 2002, Oregon received approval to expand this program using federal funds.

Each of the six states that we reviewed had a health care coverage continuation law, which applied to employers with fewer than 20 employees and thus not subject to COBRA requirements. While the states required that employers make health insurance coverage available to eligible individuals, the employers were not required to pay for this coverage. In New Jersey, North Carolina, and Utah, eligible individuals can be required to pay up to 102 percent of the cost of the premium charged under their former employer's plan (the full cost of the group health premium plus a 2 percent fee to cover the employer's administrative costs) (see table 6). In the other three states, individuals may be required to pay up to the full cost of the premium, but no administrative fee may be added. Like COBRA, the state health care coverage continuation laws did not apply to companies that terminated coverage, such as when going out of business. Nationally, premiums for state continuation coverage averaged approximately $260 a month for an individual and $676 a month for a family in 2001, which equals 24 to 61 percent of the average monthly unemployment benefit. Eligibility for, and the length of required coverage under, states' continuation coverage laws were often more limited than under COBRA. While under COBRA individuals must only have been insured the day before they stopped working, five of the six states that we reviewed had more stringent requirements.
They required individuals to have been continuously insured for the 3 to 6 months immediately prior to the separation from their job. New Jersey, Ohio, Oregon, and Utah required employers to offer a year or less of continuation coverage, compared to 18 months under COBRA and in Colorado and North Carolina.

Once individuals exhaust their COBRA or state health care continuation coverage, they may become eligible to convert to an individual policy. Although the HIPAA provisions require states to ensure that eligible individuals can move from group to individual health insurance coverage, state guaranteed conversion is specific to an insurer. Four of the six states we reviewed (Colorado, North Carolina, Ohio, and Utah) required insurers to provide individual policies to eligible individuals previously covered under a group policy sold by their company. To be eligible for guaranteed conversion, individuals had to have been continuously insured by the group health plan, or its predecessor, for 3 to 12 months (depending on the state) prior to their application for conversion—requirements that are less stringent than the 18 months of prior continuous coverage under HIPAA. State laws on guaranteed conversion contained no maximum length of required coverage; as with other individual health insurance policies, beneficiaries could renew the policies as long as they agreed to continue paying the premiums and did not commit fraud. Individuals were responsible for the conversion plan premiums, which could generally be based on the demographic and health characteristics of the individual. Thus, individual coverage under conversion policies—for which individuals pay the full premium—was generally more expensive than group coverage, especially for higher-risk individuals.

Of the six states we reviewed, New Jersey and Ohio had "guaranteed issue," which required insurers to offer coverage to all individuals in the state who were not eligible for group coverage or public insurance programs, if they were willing to pay for it. According to Ohio statute, insurers in that state could charge an individual up to 2.5 times the rate charged to another individual with a similar policy. In New Jersey, insurers were required to charge each applicant the same price for five standard plans, but monthly premiums varied by insurer. For a policy issued by a health maintenance organization (HMO) in New Jersey, with a $30 copayment per visit to the doctor, monthly premiums for single coverage ranged from $324 to more than $394, depending on the insurer, while premiums for the other standard health plans were higher. (A comparison of the five standard plans is in table 7.) In the four states we reviewed that did not have guaranteed issue laws, insurance companies could choose not to offer coverage to individual applicants and faced few or no restrictions on what they could charge individuals based on their health status, age, or other factors.

Three of the states we reviewed (Colorado, Oregon, and Utah) had established high-risk pools that served individuals with acute and chronic conditions. The high-risk pools in these three states began operation in the early 1990s and also served individuals eligible for coverage under HIPAA (see table 8). High-risk pools are subsidized. Because enrollees often have major health problems, medical claims costs are high and would exceed unsubsidized premiums collected from their enrollees.
Oregon's risk pool was subsidized by a fee assessed on insurers based on the number of people they covered. Utah subsidized the operation of its high-risk pool with state funds. Colorado used a combination of these approaches. High-risk pool premiums are higher than standard premiums for individual insurance paid by healthy applicants, although not necessarily higher than a high-risk individual would be charged in the individual market if coverage were available. State high-risk pool laws generally capped premiums at 125 to 200 percent of comparable standard commercial coverage rates. Premiums varied based on factors such as age, geographic location, type of health plan, and deductible. One state, Colorado, provided a 20 percent premium discount to certain low-income individuals. Across the three states we reviewed that had high-risk pools, undiscounted premiums for nonelderly adults ranged from less than 10 percent to close to 100 percent of the average unemployment benefit in the state.

Although Ohio, Oregon, and Utah collected data on the number of uninsured residents, none of the states that we reviewed had data sufficiently current to determine how many of their residents had lost health insurance during the recent economic decline. States' knowledge of any changes in the numbers of individuals benefiting from the different states' protections varied by option and by state, with data most often available for the three states' high-risk pools. None of the states we reviewed tracked how many of its residents obtained health coverage through state-mandated continuation coverage. Of the four states that required insurers to offer conversion plans, only Utah tracked the number of policies issued, but it did not have data current enough to determine whether usage increased during the current economic decline. New Jersey tracked the number of individuals receiving individual health coverage through its five standard plans. Enrollment in these standard plans declined in the past year, which a state representative attributed to the rising cost of coverage. Each of the three states we reviewed that had high-risk pools tracked enrollment in their pools. From March 2001 to March 2002, enrollment in high-risk pools increased by 47 percent in Colorado, almost 23 percent in Oregon, and 37 percent in Utah. But it is not clear how much of the increased participation came from the ranks of the unemployed. For example, a Colorado official said a large portion of the increased enrollment in the state's high-risk pool was likely due to insurers leaving the individual and small group health insurance market in the state. Therefore, it is difficult to determine how much of the increase included those dropped from individual or nonemployer-based group coverage and how much included the newly unemployed.

Given the cost of maintaining coverage under their former employers' health insurance plan or obtaining alternative coverage, unemployed individuals may look to states' Medicaid and SCHIP programs for coverage for themselves and their families. Unemployed adults, however, are less likely to qualify for these programs than their children due, in part, to less generous eligibility levels set for adults than for children. Colorado, Oregon, and Utah have recently received federal approval for waivers to expand eligibility for adults in Medicaid and SCHIP, which may increase coverage for unemployed individuals.
In the wake of recent fiscal pressures resulting from the economic downturn, however, New Jersey has suspended its Medicaid and SCHIP coverage expansion for new applicants. Efforts by some states to expand Medicaid and SCHIP coverage for uninsured adults have raised significant federal fiscal and legal issues, at times providing adult coverage with funds intended for children.

As unemployed adults seek health insurance, they will likely find it more difficult to secure coverage under Medicaid or SCHIP for themselves than for their children. Under Medicaid, the majority of states had set eligibility levels for nondisabled adults that were less generous than those for children. In the six states we reviewed, Medicaid's maximum income eligibility levels for nondisabled adults were lower than the levels for children. In Colorado, New Jersey, North Carolina, and Utah, the maximum income levels for coverage for these adults were under 50 percent of the federal poverty level. In contrast, Medicaid and SCHIP income eligibility levels for children ranged from 170 percent of the federal poverty level (in Oregon) to 350 percent of the federal poverty level (in New Jersey). In four of the six states, adults receiving the average unemployment benefit might not have qualified for Medicaid because that benefit would have provided at least twice the income allowed for Medicaid eligibility. In the remaining two states—Ohio and Oregon—adults who received the average unemployment benefit would have met the income eligibility requirements for Medicaid in those states (see table 9).

In Colorado, North Carolina, Oregon, and Utah, Medicaid coverage for unemployed adults was more restricted than it was for children because adults' accumulated assets could have made them ineligible for coverage even after their unemployment benefits ran out. The amount of assets allowed and the types of assets included for eligibility purposes varied by state (see table 10). For purposes of determining whether individuals reached or exceeded their asset limit, North Carolina included the cash value of life insurance, checking and savings accounts, and other investments, but excluded the value of an applicant's primary residence and vehicle. Utah required that families with children over age 6 have assets below $3,000 (with allowances for an additional $25 in assets for each additional family member) but excluded the value of one home and of one vehicle, up to $15,200. In contrast, most states nationwide have eliminated family asset tests in determining Medicaid and SCHIP eligibility for children. As of January 2002, 44 states had eliminated family asset tests for all children in families with incomes at or below the poverty level, and two other states had dropped them for certain categories of children. Among the six states we reviewed, four states did not have asset tests for children in Medicaid, while five states did not have asset tests for children in SCHIP (see table 11).

Among unemployed adults, childless adults often had more difficulty qualifying for Medicaid than parents did. The Medicaid programs in Colorado, North Carolina, and Ohio did not cover any nondisabled childless adults. In New Jersey, childless adults faced a lower Medicaid income eligibility level than parents did. Oregon and Utah covered a small number of childless adults, all of whom earned less than 150 percent of the federal poverty level (see table 12).
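A simple sketch of the income comparison behind the preceding paragraphs may help. The 2002 federal poverty guideline used here (about $15,020 a year for a family of three) and the example eligibility ceilings are assumptions chosen only to illustrate the arithmetic; they are not the reviewed states' actual thresholds.

# Illustrative only: would income from the average unemployment benefit fall
# under a Medicaid income ceiling expressed as a percentage of the federal
# poverty level (FPL)? The FPL figure and example ceilings are assumptions.

FPL_FAMILY_OF_3 = 15_020 / 12      # assumed 2002 guideline, per month
WEEKS_PER_MONTH = 52 / 12          # assumed conversion factor

def meets_income_test(weekly_benefit: float, ceiling_pct_of_fpl: float) -> bool:
    """True if monthly benefit income is at or below the Medicaid income ceiling."""
    monthly_income = weekly_benefit * WEEKS_PER_MONTH
    ceiling = FPL_FAMILY_OF_3 * ceiling_pct_of_fpl / 100
    return monthly_income <= ceiling

avg_weekly_benefit = 254.00        # national average, first quarter of 2002
for pct in (25, 50, 100):          # hypothetical eligibility ceilings
    print(f"ceiling at {pct}% of FPL -> meets income test: {meets_income_test(avg_weekly_benefit, pct)}")

# Under these assumptions, the average benefit alone exceeds any ceiling below
# roughly 88 percent of the poverty level, which is why adults in states with
# low ceilings were unlikely to qualify on income grounds.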
Some states have received approval from the federal government to expand Medicaid and SCHIP coverage for parents and childless adults, including recently unemployed individuals. Of the states we reviewed, Utah recently received a section 1115 waiver to expand Medicaid coverage to certain parents and childless adults for a benefit package limited to primary care and preventive services. Utah’s waiver is estimated to cover an additional 16,000 parents with family incomes under 150 percent of the federal poverty level and 9,000 childless adults with incomes under 150 percent of the federal poverty level. The expansion, implemented on July 1, 2002, is funded by enrollment fees and cost sharing by participants and savings from increased cost sharing and new limits on some optional services, such as mental health services, vision screening and physical therapy, for certain groups of currently eligible adults. On September 27, 2002, Colorado received approval to cover pregnant women with family income between 134 and 185 percent of the federal poverty level using SCHIP funds. Oregon also received approval on October 15, 2002, for a section 1115 waiver to expand insurance coverage for adults and children up to 185 percent of the federal poverty level using Medicaid and SCHIP funds. Oregon expects to cover an additional 60,000 individuals, but plans to phase in implementation of this expansion. On November 1, 2002, the state plans to expand its premium assistance program by paying between 50 and 95 percent of premiums for eligible individuals with incomes up to 185 percent of the federal poverty level, using both Medicaid and SCHIP funds. On February 1, 2003, Oregon plans to expand Medicaid and SCHIP eligibility to pregnant women and children with incomes up to 185 percent of the federal poverty level, and to other eligible individuals, including parents and childless adults, with incomes up to 110 percent of the federal poverty level. Further eligibility expansions may occur each quarter depending upon the availability of state funding. A state that has used a waiver to expand Medicaid and SCHIP coverage may be prompted by shortfalls in its budget to limit these expansions. Of the states we reviewed, in January 2001, New Jersey expanded Medicaid and SCHIP coverage for parents earning up to 200 percent of the federal poverty level. In June 2002, however, New Jersey suspended new enrollment of adults in this program, increased the premiums and reduced the benefits for those already covered under the expansion. New Jersey’s program had exceeded the state’s 3-year enrollment projection in 9 months. Section 1115 waivers to expand insurance coverage under Medicaid and SCHIP can extend coverage to adults who would not otherwise qualify and who would have difficulty obtaining coverage elsewhere. However, we reported earlier that some waivers are inconsistent with the goals of the Medicaid and SCHIP programs and may compromise their fiscal integrity. For example, in approving Utah’s expansion, we concluded that HHS did not adequately ensure that the waiver would be budget neutral as required for approval. We estimated that Utah’s waiver, if fully implemented, could cost the state and federal governments $59 million more than without the waiver. We found that the state’s projection of what it would have spent without the waiver inappropriately included the estimated cost of services for a new group of people who were not being covered under the state’s existing Medicaid program. 
Although we did not review Colorado and Oregon's waiver applications in our earlier report, we raised a broader legal issue about states' use of SCHIP funds to cover adults without children, which Oregon's recently approved expansion will do. In our earlier report, we found that HHS had approved an Arizona waiver proposal that would, among other things, use unspent SCHIP funding to cover adults without children, despite SCHIP's statutory objective to expand health care coverage to low-income children. In our view, HHS's approval of the waiver to cover childless adults is not consistent with this objective, and is not authorized. Consequently, we recommended that the Secretary of Health and Human Services not approve any more waivers that would use SCHIP funds for childless adults. In addition, we suggested that the Congress amend the Social Security Act to specify that SCHIP funds are not available to provide health insurance for childless adults.

Health insurance for the majority of Americans who rely on employer-based coverage could be threatened upon job loss. Federal and state laws provide some protections that are aimed at helping individuals maintain or obtain health insurance coverage in such circumstances. The protections offered, however, are not without limitations, as individuals may find that bearing the full cost of the premiums—with no employer or state subsidies—may be beyond their financial means. While those who cannot afford health insurance may look to Medicaid or SCHIP for assistance, coverage for adults is hampered by limited income eligibility and other requirements, such as asset tests, that are likely to reduce the number of adults who can qualify for coverage. Some states have made recent efforts to use the flexibility available to them under Medicaid and SCHIP to expand their programs to help cover increased numbers of uninsured adults. Tighter budgets, however, are beginning to constrain some states' ability to sustain insurance coverage expansions initiated during stronger economic times. Thus, despite program expansions, coverage under Medicaid and SCHIP may not be available to unemployed adults, while other state coverage options may be too costly for these individuals.

We provided a draft of this report for technical review to representatives of insurance departments, high-risk pools, and Medicaid programs in the six states we reviewed. Each of the states provided technical comments, which we incorporated as appropriate. In addition, in its comments, Utah disagreed with our statement—based on findings in an earlier report—that HHS did not adequately ensure that the state's section 1115 waiver met the budget neutrality test. The state contends that its waiver is budget neutral and is consistent with long-standing HHS budget neutrality practices. Since 1995, we have expressed concern that HHS's methods for assessing budget neutrality allow the inclusion of certain costs that inappropriately inflate cost estimates and result in the federal government being at risk of spending more than it would have spent had the waivers not been approved. We believe that continued use of these methods is inconsistent with the long-standing requirement for section 1115 waivers to be budget neutral and inappropriately places the federal government at risk of increased cost for the Medicaid and SCHIP programs.
We did not obtain comments from HHS on this report because we did not evaluate HHS' role or performance with respect to protections or programs that may benefit unemployed individuals.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to other interested congressional committees and other parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or members of your staff have any questions regarding this report, please contact me at (202) 512-7114 or Carolyn Yocom at (202) 512-4931. Other major contributors to this report include JoAnn Martinez-Shriver, Michael Rose, and Michelle Rosenberg.

HIPAA provides guaranteed access to coverage—“portability” from group to individual coverage—to eligible individuals who, among other criteria, had at least 18 months of coverage without a break of more than 63 days. Recognizing that many states had already passed reforms that could be modified to meet or exceed these requirements, HIPAA gave states the flexibility to implement this provision by using either the federal fallback or an alternative mechanism. Under the federal fallback approach, insurers must offer eligible individuals guaranteed access to coverage in one of three ways. HIPAA specified that a carrier must offer eligible individuals (1) all of its individual market plans, (2) only its two most popular plans, or (3) two representative plans—a lower-level and a higher-level coverage option—that are subject to a risk-spreading or financial subsidization mechanism. According to a 2002 report, 11 states opted for the federal fallback approach. Under an alternative mechanism, states may design their own approach to guarantee coverage to eligible individuals as long as certain minimum requirements are met. Essentially, the approach chosen must ensure that eligible individuals have guaranteed access to coverage with a choice of at least two different coverage options. For example, one possible alternative mechanism is a state high-risk pool. As shown in table 13, only one of the six states we reviewed relied on the federal fallback approach to ensure group-to-individual portability. The remaining states relied on their high-risk pool, another alternative mechanism, or both.

Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns, GAO-02-817. Washington, D.C.: July 12, 2002.

Health Insurance: Characteristics and Trends in the Uninsured Population, GAO-01-507T. Washington, D.C.: March 13, 2001.

Health Insurance Standards: New Federal Law Creates Challenges for Consumers, Insurers, Regulators, GAO/HEHS-98-67. Washington, D.C.: February 25, 1998.

Medicaid Section 1115 Waivers: Flexible Approach to Approving Demonstrations Could Increase Federal Costs, GAO/HEHS-96-44. Washington, D.C.: November 8, 1995.
The six states reviewed had in place a variety of protections, established prior to the economic downturn, to assist unemployed individuals in maintaining health insurance coverage:

• State-mandated continuation coverage, which required small businesses to extend their group health coverage to former employees and their families who chose to pay for it.
• Guaranteed conversion, which required insurers to allow eligible individuals to convert their group coverage to individual health insurance policies.
• Guaranteed issue, which required insurers to offer coverage to those who did not have access to group coverage or public insurance.
• High-risk pools, state-created associations that offered comprehensive health insurance benefits to individuals with acute or chronic health conditions.

However, individuals usually bore the full cost of the premiums, which was often higher than what they had paid under employer-sponsored plans. For individuals who relied on unemployment benefits as their principal income, premiums absorbed a significant share of the benefit.
There is an increasing demand, coming from the Congress and the public, for a smaller government that works better and costs less. Having valuable, accurate, and accessible financial and programmatic information is a critical element for any improvement effort to succeed. Furthermore, increasing the quality and speed of service delivery while reducing costs will require the government to make significant investments in three fundamental assets—personnel, knowledge, and capital property/fixed assets. Investments in information technology (IT) projects can dramatically affect all three of these assets. Indeed, the government's ability to improve performance and reduce costs in the information age will depend, to a large degree, on how well it selects and uses information systems investments to modernize its often outdated operations. However, the impact of information technology is not necessarily dependent on the amount of money spent, but rather on how the investments are selected and managed. This, in essence, is the challenge facing federal executives: increasing the return on money spent on IT projects by spending money more wisely, not faster.

IT projects, however, are often poorly managed. For example, one market research group estimates that about a third of all U.S. IT projects are canceled, at an estimated cost in 1995 of over $81 billion. In the last 12 years, the federal government has obligated at least $200 billion for information management, with mixed results at best. Despite this huge investment, government operations continue to be hampered by inaccurate data and inadequate systems. Too often, IT projects cost much more and produce much less than what was originally envisioned. Even worse, these systems often do not significantly improve mission performance or they provide only a fraction of the expected benefits. Of 18 major federal agencies, 7 have an IT effort that has been identified as high risk by either the Office of Management and Budget (OMB) or us.

Some private and public sector organizations, on the other hand, have designed and managed IT to improve their organizational performance. In a 1994 report, we analyzed the information management practices of several leading private and state organizations. These leading organizations were identified as such by their peers and independent researchers because of their progress in managing information to improve service quality, reduce costs, and increase work force productivity and effectiveness. From this analysis, we derived 11 fundamental IT management practices that, when taken together, provide the basis for the successful outcomes that we found in leading organizations. (See figure 1.1.)

One of the best practices exhibited by leading organizations was that they manage information systems projects as investments. This particular practice offers organizations great potential for gaining better control over their IT expenditures. In the short term (within 2 years), this practice serves as a powerful tool for carefully managing and controlling IT expenditures and better understanding the explicit costs and projected returns for each IT project. In the long term (from 3 to 5 years), this practice serves as an effective process for linking IT projects to organizational goals and objectives. However, managing IT projects as investments works most effectively when implemented as part of an integrated set of management practices.
For example, project management systems must also be in place, reengineering improvements analyzed, and planning processes linked to mission goals. While the specific processes used to implement an investment approach may vary depending upon the structure of the organization (e.g., centralized versus decentralized operations), we nonetheless found that the leading organizations we studied shared several common management practices related to the strategic use of information and information technologies. Specifically, they maintained a decision-making process consisting of three phases—selection, control, and evaluation—designed to minimize risks and maximize return on investment. (See figure 1.2.)

The Congress has passed several pieces of legislation that lay the groundwork for agencies to establish an investment approach for managing IT. For instance, revisions to the Paperwork Reduction Act (PRA) (Public Law 104-13) have put more emphasis on evaluating the operational merits of information technology projects. The Chief Financial Officers (CFO) Act (Public Law 101-576) focuses on the need to significantly improve financial management and reporting practices of the federal government. Having accurate financial data is critical to establishing performance measures and assessing the returns on IT investments. Finally, the Government Performance and Results Act (GPRA) (Public Law 103-62) requires agencies to set results-oriented goals, measure performance, and report on their accomplishments.

In addition, the recently passed Information Technology Management Reform Act (ITMRA) (Division E of Public Law 104-106) requires federal agencies to focus more on the results achieved through IT investments while streamlining the federal IT procurement process. Specifically, this act, which became effective August 8 of this year, introduces much more rigor and structure into how agencies approach the selection and management of IT projects. Among other things, the head of each agency is required to implement a process for maximizing the value and assessing and managing the risks of the agency's IT acquisitions. Appendix V summarizes the primary IT investment provisions contained in ITMRA.

ITMRA also heightens the role of OMB in supporting and overseeing agencies' IT management activities. The Director of OMB is now responsible for promoting and directing that federal agencies establish capital planning processes for IT investment decisions. The Director is also responsible for evaluating the results of agency IT investments and enforcing accountability. The results of these decisions will be used to develop recommendations for the President's budget. OMB has begun to take action in these areas. In November 1995, OMB, with substantial input from GAO, published a guide designed to help federal agencies systematically manage and evaluate their IT-related investments. This guide was based on the investment processes found at the leading organizations. Recent revisions to OMB Circular A-130 on federal information resources management have also placed greater emphasis on managing information system projects as investments. And the recently issued Part 3 of OMB Circular A-11, which replaced OMB Bulletin 95-03, "Planning and Budgeting for the Acquisition of Fixed Assets," provides additional guidance and information requirements for major fixed asset acquisitions.
The Chairman, Senate Subcommittee on Oversight of Government Management and the District of Columbia, Committee on Governmental Affairs, and the Chairman and Ranking Minority Member, House Committee on Government Reform and Oversight, requested that we compare and contrast the management practices and decision processes used by leading organizations with those of a small sample of federal agencies. The process used by leading organizations is embodied in OMB's Evaluating Information Technology Investments: A Practical Guide and in specific provisions of the Information Technology Management Reform Act of 1996.

The agencies we examined are the National Aeronautics and Space Administration (NASA) ($1.6 billion spent on IT in FY 1994), National Oceanic and Atmospheric Administration (NOAA) ($296 million spent on IT in FY 1994), Environmental Protection Agency (EPA) ($302 million spent on IT in FY 1994), Coast Guard ($157 million spent on IT in FY 1994), and the Internal Revenue Service (IRS) ($1.3 billion spent on IT in FY 1994). We selected the federal agencies for our sample based on one or more of the following characteristics: (1) large IT budgets, (2) expected IT expenditure growth rates, and (3) programmatic risk as assessed by GAO and OMB. In addition, the Coast Guard was selected because of its progress in implementing an investment process. Collectively, these agencies spent about $3.7 billion on IT in FY 1994—16 percent of the total the federal government spent on IT.

Our review focused exclusively on how well these five agencies manage information technology as investments, one of the 11 practices used by leading organizations to improve mission performance, as described in our best practices report. As such, our evaluation focused only on policies and practices used at the agencywide level; we did not evaluate the agencies' performance in the 10 other practices. In addition, we did not systematically examine the overall IT track records of each agency.

During our review of agency IT investment decision-making processes, we reviewed agencies' policies, practices, and procedures for managing IT investments; interviewed senior executives, program managers, and IRM professionals; and determined whether agencies followed practices similar to those used by leading organizations to manage information systems projects as investments. We developed the attributes needed to manage information systems projects as investments from the Paperwork Reduction Act, the Federal Acquisition Streamlining Act, OMB Circular A-130, GAO's "best practices" report on strategic information management, GAO's strategic information management toolkit, and OMB's guide Evaluating Information Technology Investments: A Practical Guide. Many of the characteristics of this investment approach are contained in the Information Technology Management Reform Act of 1996 (as summarized in appendix V). However, this law was not in effect at the time of our review.

To identify effects associated with the presence or absence of investment controls, we reviewed agencies' reports and documents, related GAO and Inspector General reports, and other external reports. We also discussed the impact of the agencies' investment controls with senior executives, program managers, and IRM professionals to get an agencywide perspective on the controls used to manage IT investments. Additionally, we reviewed agency documentation dealing with IT selection, budgetary development, and IT project reviews.
To determine how much each agency spent on information technology, we asked each agency for information on spending, staffing, and their 10 largest IT systems and projects. The agencies used a variety of sources for the same data elements, which may make comparisons among agencies unreliable. While data submitted by the agencies were validated by agency officials, we did not independently verify the accuracy of the data.

Most of our work was conducted at agencies' headquarters in Washington, D.C. We also visited NOAA offices in Rockville, Maryland, and the National Weather Service in Silver Spring, Maryland. In addition, we visited NASA program, financial, and IRM officials at Johnson Space Center in Houston, Texas, and Ames Research Center at Moffett Field, California, to learn how they implement NASA policy on IT management. We performed the majority of our work from April 1995 through September 1995, with selected updates through July 1996, in accordance with generally accepted government auditing standards. We updated our analyses of IRS and NASA in conjunction with other related audit work. In addition, several of the agencies provided us with updated information as part of their comments on a draft version of the report. Many of these changes have only recently occurred, and we have not fully evaluated them to determine their effect on the agencies' IT investment processes.

We provided and discussed a draft of this report with officials from OMB, EPA, NASA, NOAA, IRS, and the Coast Guard, and have incorporated their comments where appropriate. OMB's written comments, as well as our evaluation, are provided in appendix I. Appendix II profiles each agency's IT spending, personnel, and major projects. Appendix III provides a brief description of an IT investment process approach based on work by GAO and OMB. Appendix IV provides a brief overview of each agency's IT management processes. Because of its relevance to this report, the investment provisions of the Information Technology Management Reform Act of 1996 are summarized in appendix V. Major contributors to this report are listed in appendix VI.

All of the agencies we studied—NASA, IRS, the Coast Guard, NOAA, and EPA—had at least elements or portions of an IT investment process in place. For instance, the Coast Guard had a selection process with decision criteria that included an analysis of cost, risk, and return data; EPA had created an executive management group to address cross-agency IT issues; NASA and NOAA utilized program control meetings to ensure senior management involvement in monitoring the progress of important ongoing IT projects; and IRS had developed a systems investment evaluation review methodology and used it to conduct postimplementation reviews of some Tax Systems Modernization projects. However, none of these five agencies had implemented a complete, institutionalized investment approach that would fulfill the requirements of PRA and ITMRA. Consequently, IT decision-making at these agencies was often inconsistent or based on the priorities of individual units rather than the organization as a whole. Additionally, cost-benefit and risk analyses were rarely updated as projects proceeded and were not used for managing project results. Also, the mission-related benefits of implemented systems were often difficult to determine since agencies rarely collected or compared data on anticipated versus actual costs and benefits.
In general, we found that the IT investment control processes used at the case study agencies at the time of our review contained four main weaknesses. While all four weaknesses may not have been present at each agency, in comparison to leading organizations, the case study agencies

• lacked a consistent process (used at all levels of the agency) for uniformly selecting and managing systems investments;
• focused their selection processes on selected efforts, such as justifying new project funding or focusing on projects already under development, rather than managing all IT projects—new, under development, and operational—as a portfolio of competing investments;
• made funding decisions without giving adequate attention to management control or evaluation processes; and
• made funding decisions based on negotiations or undefined decision criteria and did not have the up-to-date, accurate data needed to support IT investment decisions.

Appendix IV provides a brief overview of how each agency's current processes for selecting, controlling, and evaluating IT projects worked.

Leading organizations use the selection, control, and evaluation decision-making processes in a consistent manner throughout different units. This enables the organization, even one that is highly decentralized, to make trade-offs between projects, both within and across business units. Figure 2.1 illustrates how this process can be applied to the federal government, where major cabinet departments may have several agencies under their purview. IT portfolio investment processes can exist at both the departmental and agency levels. As with leading organizations, the key factor is being able to determine which IT projects and resources are shared (and should be reviewed at the departmental level) and which are unique to each agency. Three common criteria used by leading organizations are applicable in the federal setting. These threshold criteria include (1) high-dollar, high-risk IT projects (risk and dollar amounts having been already defined), (2) cross-functional projects (two or more organizational units will benefit from the project), and (3) common infrastructure support (hardware and telecommunications). Projects that meet these particular threshold criteria are discussed, reviewed, and decided upon at a departmentwide level. The key to making this work is having clearly defined roles, responsibilities, and criteria for determining the types of projects that will be reviewed at the different organizational levels.

As described in ITMRA, agency heads are to implement a process for maximizing the value and assessing and managing the risks of IT investments. Further, this process should be integrated with the agency's budget, financial, and program management processes. Whether highly centralized or decentralized, matrixed or hierarchical, agencies can most effectively reap the benefits of an investment process by developing and maintaining consistent processes within and across their organizations.

One of the agencies we reviewed—the Coast Guard—used common investment criteria for making cross-agency IT decisions. IRS had defined some criteria, but was not yet using these criteria to make decisions. The three other agencies—NASA, EPA, and NOAA—chose IT projects based on inconsistent or nonexistent investment processes. There was little or no uniformity in how risks, benefits, and costs of various IT projects across offices and divisions within these three agencies were evaluated.
Thus, cross-comparisons between systems of similar size, function, or organizational impact were difficult at best. More important, management had no assurance that the most important mission objectives of the agency were being met by the suite of system investments that was selected.

NASA, for instance, allowed its centers and programs to make their own IT funding decisions for mission-critical systems. These decisions were made without an agencywide mechanism in place to identify low-value IT projects or costs that could be avoided by capitalizing on opportunities for data sharing and system consolidation across NASA units. As a result, identifying cross-functional system opportunities was problematic at best. The scope of this problem became apparent as a result of a special NASA IT review. In response to budget pressures, NASA conducted an agencywide internal information systems review to identify cost savings. The resulting March 1995 report described numerous instances of duplicate IT resources, such as large-scale computing and wide area network services, that were providing similar functions. A subsequent NASA Inspector General's (IG) report, also issued in March 1995, substantiated this special review, finding that at one center NASA managers had expended resources to purchase or develop information systems that were already available elsewhere, either within NASA or, in some cases, within that center itself.

While this special review prompted NASA to plan several consolidation efforts, such as consolidating its separate wide area networks (for projected savings of $236 million over 5 years), the risk of purchasing duplicate IT resources remained because of weaknesses in its current decentralized decision-making process. For example, NASA created chief information officer (CIO) positions for NASA headquarters and for each of its 23 centers. These CIOs have a key role in improving agencywide IT cooperation and coordination. However, the CIOs have limited formal authority and to date have only exercised control over NASA's administrative systems—which account for about 10 percent of NASA's total IT budget. With more clearly defined CIO roles, responsibilities, and authority, it is likely that additional opportunities for efficiencies will be identified.

NASA recently created a CIO council to establish high-level policies and standards, approve information resources management plans, and address issues and initiatives. The council will also serve as the IT capital investment advisory group to the proposed NASA Capital Investment Council. NASA plans for this Capital Investment Council to have responsibility for looking at all capital investments across NASA, including those for IT. NASA's proposed Capital Investment Council may fill this need for identifying cross-functional opportunities; however, it is too early to evaluate its impact. By having consistent, quantitative, and analytical processes across NASA that address both mission-critical and administrative systems, NASA could more easily identify cross-functional opportunities. NASA has already demonstrated that savings can be achieved by looking within mission-critical systems for cross-functional opportunities.
For instance, NASA estimated that $74 million was saved by developing a combined Space Station and Space Shuttle control center using largely commercial off-the-shelf software and a modular development approach, rather than the original plan of having two separate control centers that used mainframe technology and custom software.

EPA, like NASA, followed a decentralized approach for making IT investment decisions. Program offices have had control and discretion over their specific IT budgets, regardless of project size or possible cross-office impact. As we have previously reported, this has led to stovepiped systems that do not have standard data definitions or common interfaces, making it difficult to share environmental data across the agency. This is important because such data sharing is crucial to implementing EPA's strategic goals. In 1994, EPA began to address this problem by creating a senior management Executive Steering Committee (ESC) charged with ensuring that investments in agencywide information resources are managed efficiently and effectively. This committee, composed of senior EPA executives, has the responsibility to (1) recommend funding on major system development efforts and (2) allocate the IT budget reserved for agencywide IRM initiatives, such as geographical information systems (GIS) support and data standards. At the time of our review, the ESC had not reviewed or made recommendations on any major information system development efforts. Instead, the ESC focused its activity on spending funds allocated to it for agencywide IRM policy initiatives, such as intra-agency data standards. The ESC met on June 26, 1996, to assess the impact of ITMRA upon EPA's IT management process.

In conducting their selection processes, leading organizations assess and manage the different types of IT projects, such as mission-critical or infrastructure projects, at all phases of their life cycle in order to create a complete strategic investment portfolio. (See figure 2.2.) By scrutinizing and analyzing their entire IT portfolio, managers can examine the costs of maintaining existing systems versus investing in new ones. By continually and rigorously reevaluating the entire project portfolio based on mission priorities, organizations can reach decisions on systems based on overall contribution to organizational goals. Under ITMRA, agencies will need to compare and prioritize projects using explicit quantitative and qualitative decision criteria.

At the federal agencies we studied, some prioritization of projects was conducted, but none made managerial trade-offs across all types of projects. IRS, NOAA, and the Coast Guard each conducted some type of portfolio analyses; EPA and NASA did not. Additionally, the portfolio analyses that were performed generally covered projects that were either high dollar, new, or under development. For example, in 1995 we reported that IRS executives were consistently maintaining that all 36 TSM projects, estimated to cost up to $10 billion through the year 2001, were equally important and must all be completed for the modernization to succeed. This approach, as well as the accompanying initial failure to rank the TSM projects according to their prioritized needs and mission performance improvements, has meant that IRS could not be sure that the most important projects were being developed first.
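To make the idea of explicit, weighted decision criteria concrete, the following minimal Python sketch ranks a handful of candidate projects. The criteria, weights, 1-to-5 rating scale, and project names are hypothetical assumptions for illustration only, not data from any of the agencies reviewed.

```python
# Minimal sketch of a weighted-scoring model for ranking candidate IT projects.
# The criteria, weights, rating scale, and projects are hypothetical illustrations.

CRITERIA_WEIGHTS = {
    "mission_benefit": 0.40,   # expected improvement in mission performance
    "financial_return": 0.30,  # projected return on investment
    "risk": 0.30,              # technical/schedule/cost risk (higher score = lower risk)
}

# Each candidate is scored 1-5 against every criterion by the review board.
candidates = {
    "Project A (new development)":      {"mission_benefit": 5, "financial_return": 3, "risk": 2},
    "Project B (system consolidation)": {"mission_benefit": 3, "financial_return": 5, "risk": 4},
    "Project C (maintenance upgrade)":  {"mission_benefit": 2, "financial_return": 2, "risk": 5},
}

def weighted_score(scores):
    """Combine criterion scores into a single value comparable across projects."""
    return sum(CRITERIA_WEIGHTS[criterion] * score for criterion, score in scores.items())

# Rank the portfolio from highest to lowest weighted score.
for name, scores in sorted(candidates.items(), key=lambda item: weighted_score(item[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

In an actual agency process, the criteria, weights, and scoring scale would be defined agencywide so that rankings produced by different units remain comparable.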
Since our 1995 report, IRS has begun to rank and prioritize all of the proposed TSM projects using cost, risk, and return decision criteria. However, these decision criteria are largely qualitative, the data used for decisions were not validated or reliable, and analyses were not based on calculations of expected return on investment. In addition, according to IRS, its investment review board uses a separate process with different criteria for analyzing operational systems. IRS also said that the board does not review research and development (R&D) systems or field office systems. Using separate processes for some system types and not including all systems prevents IRS from making comparisons and trade-offs as part of a complete IT portfolio. Of all the agencies we reviewed, the Coast Guard had the most experience using a comprehensive selection phase. In 1991, the Coast Guard started a strategic information resources management process and shortly thereafter initiated an IT investment process. Under this investment process, a Coast Guard working group from the IRM office ranks and prioritizes new IT projects and those under development based on explicit risk and return decision criteria. A senior management board meets annually to rank the projects and decide on priorities. The Coast Guard has derived benefits from its project selection process. During the implementation of its IT investment process, the Coast Guard identified opportunities for systems consolidation. For example, the Coast Guard reported that five separate personnel systems are being incorporated into the Personnel Management Information System/Joint Military Pay System II for a cost avoidance of $10.2 million. The Coast Guard also identified other systems consolidation opportunities that, if implemented, could result in a total cost savings of $77.4 million. However, at the time of our review, the Coast Guard’s selection process was still incomplete. For example, R&D projects and operational systems were not included in the prioritization process. As a result, the Coast Guard could not make trade-offs between all types of proposed systems investments, creating a risk that new systems would be implemented that duplicate existing systems. Additionally, the Coast Guard was at risk of overemphasizing investments in one area, such as maintenance and enhancements for existing systems, at the expense of higher value investments in other areas, such as software applications development supporting multiple unit needs. Leading organizations continue to manage their investments once selection has occurred, maintaining a cycle of continual control and evaluation. Senior managers review the project at specific milestones as the project moves through its life cycle and as the dollar amounts spent on the project increase. (See figure 2.3.) At these milestones, the executives compare the expected costs, risks, and benefits of earlier phases with the actual costs incurred, risks encountered, and benefits realized to date. This enables senior executives to (1) identify and focus on managing high-potential or high-risk projects, (2) reevaluate investment decisions early in a project’s life cycle if problems arise, (3) be responsive to changing external and internal conditions in mission priorities and budgets, and (4) learn from past success and mistakes in order to make better decisions in the future. 
The level of management attention focused on each of the three investment phases varies in proportion to such factors as the relative importance of each project in the portfolio, the relative project risks, and the relative number of projects in different phases of the system development process. The control phase focuses senior executive attention on ongoing projects to regularly monitor their interim progress against projected risks, cost, schedule, and performance. The control phase requires projects to be modified, continued, accelerated, or terminated based on the results of those assessments. In the evaluation phase, the attention is focused on implemented systems to give a final assessment of risks, costs, and returns. This assessment is then used to improve the selection of future projects.

Similarly, in the federal government, GPRA forces a shift in the focus of federal agencies—away from such traditional concerns as staffing and activity levels and toward one overriding issue: results. GPRA requires agencies to set goals, measure performance, and report on their accomplishments. Just as in leading organizations, GPRA, in concert with the CFO Act, is intended to bring a more disciplined, businesslike approach to the management of federal programs.

The agencies we reviewed focused most of their resources and attention on selecting projects and gave less attention to controlling or evaluating those projects. While IRS, NASA, and NOAA had implemented control mechanisms, and IRS had developed a postimplementation review methodology, none of the agencies had complete and comprehensive control and evaluation processes in place. Specifically, in the five case study agencies we evaluated, we found that

• control mechanisms were driven primarily by cost and schedule concerns, without any focus on quantitative performance measures;
• evaluations of actual versus projected returns were rarely conducted; and
• information and lessons learned in either the control or evaluation phases were not systematically fed back to the selection phase to improve the project selection process.

Leading organizations maintain control of a project throughout its life cycle by regularly measuring its progress against not only projected cost and schedule estimates, but also quantitative performance measures, such as benefits realized or demonstrated in pilot projects to date. To do this, senior executives from the program, IRM, and financial units continually monitor projects and systems for progress and identify problems. When problems are identified, they take immediate action to resolve them, minimize their impact, or alter project expectations. Legislation now requires federal executives to conduct this type of rigorous project monitoring. With the passage of ITMRA, agencies are required to demonstrate, through performance measures, how well IT projects are improving agency operations and mission effectiveness. Senior managers are also to receive independently verifiable information on cost, technical and capability requirements, timeliness, and mission benefit data at project milestones. Furthermore, pursuant to the Federal Acquisition Streamlining Act of 1994 (Public Law 103-355), if a project deviates from cost, schedule, and performance goals, the agency head is required to conduct a timely review of the project and identify appropriate corrective action—to include project termination.
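As a purely illustrative sketch of the kind of milestone check this control discipline implies, the following Python fragment compares a project's baseline with actual figures and flags the project for management review when the deviation exceeds a tolerance. The 10 percent threshold and all figures are hypothetical assumptions, not values drawn from the statute or from any agency reviewed.

```python
# Illustrative milestone check: flag a project for review when actual cost,
# schedule, or benefits deviate from the approved baseline by more than a
# tolerance. The 10 percent tolerance and all figures are hypothetical.

DEVIATION_TOLERANCE = 0.10  # assumed management threshold

def deviation(planned, actual):
    """Fractional deviation of actual from planned (positive = worse than plan)."""
    return (actual - planned) / planned

milestone_report = {
    "cost_millions":    {"planned": 40.0, "actual": 47.0},
    "schedule_months":  {"planned": 18.0, "actual": 19.0},
    "benefit_millions": {"planned": 12.0, "actual": 9.0},
}

flags = []
for measure, figures in milestone_report.items():
    d = deviation(figures["planned"], figures["actual"])
    # A benefit shortfall is as serious as a cost or schedule overrun, so flip its sign.
    overrun = -d if measure.startswith("benefit") else d
    if overrun > DEVIATION_TOLERANCE:
        flags.append(f"{measure}: {overrun:+.0%} off plan")

if flags:
    print("Deviation exceeds tolerance; schedule a management review:")
    print("\n".join(flags))
else:
    print("Project within tolerance; continue as planned.")
```

In an agency process, the tolerances and measures would come from the project's approved baseline and the control board's own criteria rather than fixed constants.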
Two of the agencies we reviewed—the Coast Guard and EPA—did not use management control processes that focused on IT systems projects. The other three agencies—IRS, NOAA, and NASA—had management control processes that focused primarily on schedule and cost concerns, but not on interim evaluations of performance and results. Rarely did we find examples in which anticipated benefits were compared to results at critical project milestones. We also found few examples of lessons that were learned during the control phase being cycled back to improve the selection phase.

To illustrate, both IRS and NASA used program control meetings (PCMs) to keep senior executives informed of the status of their major systems by requiring reports, in the form of self-assessments, from the project managers. However, these meetings did not focus on how projects were achieving interim, measurable improvement targets for quality, speed, and service that could form the basis for project decisions about major modifications or termination. IRS, for instance, used an implementation schedule to track different components of each of its major IT projects under TSM. Based on our discussions with IRS officials, the PCMs focused on factors bearing on real or potential changes in project costs or schedule. Actual, verified data on interim application or system testing results—compared with projected operational and mission improvements—were not evaluated.

At NASA, senior program executives attended quarterly Program Management Council (PMC) meetings to be kept informed of major programs and projects and to take action when problems arose. While not focused exclusively on IT issues, these meetings were part of a review process that looked at implementation issues of programs and projects that (1) were critical to fulfilling NASA's mission, particularly those that were assigned to two or more field installations, (2) involved the allocation of significant resources, defined as projects whose life-cycle costs were over $200 million, or (3) warranted special management attention, including those that required external agency reporting on a regular basis. During the PMC meetings, senior executives reviewed self-assessments (grades of green, yellow, and red), done by the responsible project manager, on the cost, schedule, and technical progress of the project.

Using this color-coded grading scheme, NASA's control process focused largely on cost, schedule, and technical concerns, but not on assessing improvements to mission performance. Additionally, the grading scheme was not based on quantitative criteria, but instead was largely qualitative and subjective in nature. For instance, projects were given a "green" rating if they were "in good shape and on track consistent with the baseline." A "yellow" rating was defined as a "concern that is expected to be resolved within the schedule and budget margins," and a "red" rating was defined as "a serious problem that is likely to require a change in the baseline determined at the beginning of the project." However, the lack of quantitative criteria, benefit analysis, and performance data invited widely divergent interpretations and misunderstanding of the true value of the projects under review. As of 1995, three IT systems had met NASA's review criteria and had been reviewed by the PMC. These three systems constituted about 7 percent of NASA's total fiscal year 1994 IT spending.
No similar centralized review process existed for lower dollar projects, which could have allowed problem projects and systems that collectively accounted for significant costs to be overlooked. For instance, in 1995 NASA terminated an automated accounting system project that had been under development for about 6 years, had cost about $45 million to date, and had an expected life-cycle cost of over $107 million. In responding to a draft of this report, the NASA CIO said that the current cost threshold of $200 million is being reduced to a lower level to ensure that most, if not all, agency IT projects will be subject to PMC reviews. In addition, the CIO noted that NASA's internal policy directive on program/project management is being revised to (1) include IT evaluation criteria that are aligned with ITMRA and executive-branch guidance and (2) clearly establish the scope and levels of review (agency, lead center, or center) for IT investment decisions.

Once projects have been implemented and become operational, leading organizations evaluate them to determine whether they have achieved the expected benefits, such as lowered cost, reduced cycle time, increased quality, or increased speed of service delivery. They do this by conducting project postimplementation reviews (PIRs) to compare actual to planned costs, returns, and risks. The PIR results are used to calculate a final return on investment, determine whether any unanticipated modifications may be necessary to the system, and provide "lessons learned" input for changes to the organization's IT investment processes and strategy. ITMRA now requires agencies to report to OMB on the performance benefits achieved by their IT investments and how those benefits support the accomplishment of agency goals.

Only one of the five federal agencies we reviewed—IRS—systematically evaluated implemented IT projects to determine actual costs, benefits, and risks. Indeed, we found that most of the agencies rarely evaluated implemented IT projects at all. In general, the agency review programs were insufficiently staffed and used poorly defined and inconsistent approaches. In addition, in cases where evaluations were done, the findings were not used to consider improvements or revisions in the IT investment decision-making process.

NOAA, for instance, had no systematic process in place to ensure that it was achieving the planned benefits from its annual $300 million IT expenditure. For example, of the four major IT projects that constitute the $4.5 billion National Weather Service (NWS) modernization effort, only the benefits actually accruing from one—the NEXRAD radars—had been analyzed. While not the only review mechanism used by the agency, NOAA's central review program was poorly staffed. NOAA headquarters, with half a staff year devoted to this review program, generally conducted reviews in collaboration with other organizational units and had participated in only four IT reviews over the last 3 fiscal years. Additionally, these reviews generally did not address the systems' projected versus actual cost, performance, and benefits.

IRS had developed a PIR methodology that it used to conduct five systems postimplementation reviews. A standardized methodology is important because it makes the reviews consistent and adds rigor to the analytical steps used in the review process. IRS used the June 1994 PIR on the Corporate Files On-Line (CFOL) system as the model for this standardized methodology.
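In its simplest form, the projected-versus-actual comparison that a postimplementation review produces can be sketched as follows. The figures and the simple return-on-investment formula are hypothetical illustrations only, not data from the IRS or NOAA reviews.

```python
# Minimal sketch of the projected-versus-actual comparison a postimplementation
# review might report. All dollar figures are hypothetical.

projected = {"cost": 20.0, "benefit": 35.0}   # millions, from the selection-phase business case
actual    = {"cost": 25.0, "benefit": 24.5}   # millions, observed after implementation

def simple_roi(figures):
    """Net benefit divided by cost."""
    return (figures["benefit"] - figures["cost"]) / figures["cost"]

print(f"Projected ROI: {simple_roi(projected):.0%}")   # what the business case promised
print(f"Actual ROI:    {simple_roi(actual):.0%}")      # what the review found
cost_growth = (actual["cost"] - projected["cost"]) / projected["cost"]
print(f"Cost growth:   {cost_growth:.0%}")
```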
In December 1995, IRS used the PIR methodology to complete a review of the Service Center Recognition/Image Processing System (SCRIPS). Subsequently, three more PIRs have been completed (TAXLINK, the Enforcement Revenue Information System, and the Integrated Collection System) and five more are scheduled. IRS estimated that the five completed systems have an aggregate cost of about $845 million. However, the PIR methodology was not integrated into a cohesive investment process. Specifically, there were no mechanisms in place to take the lessons learned from the PIRs and apply them to the decision criteria and other tools and techniques used in its investment process. As a result, the PIRs that were conducted did not meet one of their primary objectives—to ensure continual improvement based on lessons learned—and IRS ran the risk of repeating past mistakes.

To help make continual decisions on IT investments, leading organizations require all projects to have complete and up-to-date project information. This information includes cost and benefit data, risk assessments, implementation plans, and initial performance measures. (See figure 2.4.) Maintaining this information allows senior managers to rigorously evaluate the current status of projects. In addition, it allows them to compare IT projects across the organization; consider continuation, delay, or cancellation trade-offs; and take action accordingly. ITMRA requires agencies to use quantitative and qualitative criteria to evaluate the risks and the returns of IT investments. As such, agencies need to collect and maintain accurate and reliable cost, benefit, risk, and performance data to support project selection and control decisions. The requirement for accurate, reliable, and up-to-date financial and programmatic information is also a primary requirement of the CFO Act and is essential to fulfilling agency requirements for evaluating program results and outcomes under GPRA.

At the five case study agencies we evaluated, we found that, in general,

• agency IT investment decisions were based on undefined or implicit criteria, and
• data on each project's cost, schedule, risks, and returns were not documented, defined, or kept up-to-date and, in many cases, were not used to make investment decisions.

To ensure that all projects and operational systems are treated consistently, leading organizations define explicit risk and return decision criteria. These criteria are then used to evaluate every IT project or system. Risk criteria involve managerial, technical, resource, skill, security, and organizational factors, such as the size and scope of the project, the extent of use of new technology, the potential effects on the user organization, the project's technical complexity, and the project's level of dependency on other systems or projects. Return criteria are measured in financial and nonfinancial terms. Financial measurements can include return on investment and internal rate of return analyses, while nonfinancial assessments can include improvements in operational efficiency, reductions in cycle time, and progress in better meeting customer needs. Of the five agencies in our sample, only the Coast Guard used a complete set of decision criteria.
These decision criteria included (1) risk assessments of schedule, cost, and technical feasibility dimensions, (2) cost-benefit impacts of the investment, (3) mission effectiveness measures, (4) degree of alignment with strategic goals and high-level interest (such as from Congress or the President), and (5) organizational impact in the areas of personnel training, quality of work life, and increased scope of service. The Coast Guard used these criteria to prioritize IT projects and justify final selections. The decision criteria were weighted and scored, and projects were evaluated to determine those with the greatest potential to improve mission performance.

Generally, officials in other agencies stated that they determine which projects to fund based on the judgmental expertise of decisionmakers involved in the process. NOAA, for instance, had a board of senior executives that met annually to determine budget decisions across seven strategic goals. Working groups for each strategic goal met, and each created a prioritized funding list, which was then submitted to the executive decision-making board. These working groups did not have uniform criteria for selecting projects. The executive board accepted the prioritized lists as submitted and made funding threshold decisions based on these lists. As a result, the executive board could not easily make consistent, accurate trade-offs among the projects that were selected by these individual working groups on a repeatable basis. In addition, to maximize funding for a specific working group, project rankings may not have been based on true risk or return. According to a NOAA senior manager and the chair of one of the NOAA working groups, one group ranked high-visibility projects near the bottom of the list to encourage the senior decision-making board to draw the budgetary cut-off line below these high-visibility projects. Few of these high-visibility projects were at the top of the list, despite being crucial to NOAA and high on the list of the NOAA Administrator's priorities. Explicit decision criteria would eliminate this type of budgetary gamesmanship.

Leading organizations consider project data the foundation by which they select, control, and evaluate their IT investments. Without such data, participants in an investment process cannot determine the value of any one project. Leading organizations use rigorous and up-to-date cost-benefit analyses, risk assessments, sensitivity analyses, and project-specific data, including current costs, staffing, and performance, to make funding decisions and project modifications based, whenever possible, on quantifiable data. While the agencies in our sample developed documents in order to get project approvals, little effort was made to ensure that the information was kept accurate and up-to-date, and rarely were the data used to manage the project throughout its life cycle.

During our review, we asked each agency to supply us with basic data on its largest dollar IT projects. However, this information was not readily available, and gathering it required agency officials to rely on a variety of sometimes incomparable sources for system cost, life-cycle phase, and staffing levels. In addition, some of the agencies could not comparatively analyze IT projects because they did not keep a comprehensive accounting of data on all of their IT systems. For example, EPA had to conduct a special information collection to identify life-cycle cost estimates on its major systems and projects for this report.
While the individual system managers at EPA did have system life-cycle cost estimates, the fact that this information was maintained in a decentralized fashion made cross-system comparisons unlikely. In a 1995 report, the NASA IG found that neither NASA headquarters nor any of the NASA centers had a complete inventory of all information systems for which they were responsible.

All of the agencies we reviewed conducted cost-benefit analyses for their major IT projects. However, these analyses were generally done to support decisions for project approval and were seldom kept current. In addition, the cost-benefit projections were rarely used to evaluate actual project results. The NWS modernization, for instance, has a cost-benefit analysis that was done in 1992. This analysis covers the four major systems under the modernization. To be effective, an analysis should include the costs and benefits of each project, alternatives to that project, and, finally, a combined cost-benefit analysis for the entire modernization. However, the cost-benefit analysis that was conducted only compares the aggregate costs and benefits of the NWS modernization initiative against the current system. It does not assess or analyze the costs and benefits of each system, nor does it examine alternatives to those systems. As a result, NWS does not know if each of the modernization projects is cost-beneficial and cannot make trade-offs among them. Using only this analysis, decisionmakers are forced to choose either the status quo or all of the projects proposed under the modernization.

Without updated cost-benefit data, informed management decisions become difficult. We reported in April 1995 that NWS was trying to assess user concerns related to the Automated Surface Observing System (ASOS), one of the NWS modernization projects, but that NWS did not have a complete estimate of what it would cost to address these concerns. As we concluded in the report, without reliable estimates of what an enhanced or supplemented ASOS would cost, it would be difficult for NWS to know whether continued investment in ASOS is cost-beneficial.

We provided and discussed a draft of this report with officials from EPA, NASA, NOAA, IRS, and the Coast Guard, and have incorporated their comments where appropriate. Several of the agencies noted that, in response to the issuance of OMB's guidance on IT investment decision-making and the passage of ITMRA, they have made process changes and organizational modifications affecting IT funding decisions. We have incorporated this information into the report where applicable. However, many of the process changes and modifications have occurred very recently, and we have not fully evaluated these changes or determined their effects.

Officials from NOAA and NASA also had reservations about the applicability of the investment portfolio approach to their organizations because their decentralized operating environments were not conducive to a single agencywide portfolio model with a fixed set of criteria. Because any organization, whether centralized or decentralized, has to operate within the parameters of a finite budget, priorities must still be set, across the organization, about where limited IT dollars will be spent to achieve maximum mission benefits. We agree that many IT spending decisions can be made at the agency or program level.
However, there are some decisions—especially those involving projects that are (1) high-risk, high-dollar, (2) cross-functional, or (3) providing a common infrastructure (e.g., telecommunications)—that should be made at a centralized, departmental level. Establishing a common, organizationwide focus, while still maintaining a flexible distribution of departmental and agency/program/site decision-making, can be achieved by implementing standard decision criteria. These criteria help ensure that projects are assessed and evaluated consistently at lower levels, while still maintaining an enterprisewide portfolio of IT investments.

Buying information technology can be a high-risk, high-return undertaking that requires strong management commitment and a systematic process to ensure successful outcomes. By using an investment-driven management approach, leading organizations have significantly increased the realized return on information technology investments, reduced the risk of cost overruns and schedule delays, and made better decisions about how their limited IT dollars should be spent. Adopting such an investment-driven approach can provide federal agencies with similar opportunities to achieve greater benefits from their IT investments on a more consistent basis. However, the federal case study agencies we examined used decision-making processes that lacked many essential components associated with an investment approach. Critical weaknesses included the absence of reliable, quantitative cost figures, net return on investment calculations, rigorous decision criteria, and postimplementation project reviews. With sustained management attention and substantive improvements to existing processes, these agencies should be able to meet the investment-related provisions of ITMRA.

Implementing and refining an IT investment process, however, is not an easy undertaking and cannot be accomplished overnight. Maximizing the returns and minimizing the risks on the billions of dollars that are spent each year for IT will require continued efforts on two fronts. First, agencies must fundamentally change how they select and manage their IT projects. They must develop and begin using a structured IT investment approach that encompasses all aspects of the investment process—selection, control, and evaluation. Second, oversight attention far beyond current levels must be given to agencies' management processes and to the actual results that are being produced. Such attention should include the development of policies and guidance as well as selective evaluations of processes and results. These evaluations should have a dual focus: they should identify and address deficiencies that are occurring, but they should also highlight positive results in order to share lessons learned and speed success.

OMB's established leadership role, as well as the policy development and oversight responsibilities that it was given under ITMRA, place it in a key position to provide such oversight. OMB has already initiated several changes to governmentwide guidance to encourage the investment approach to IT decision-making and has drawn upon the assistance of several key interagency working groups composed of senior agency officials. Such efforts should be continued and expanded to ensure that the federal government gets the most return for its information technology investments.
Given its significant leadership responsibility in supporting agencies' improvement efforts and responding to requirements of ITMRA, it is imperative that OMB continue to clearly define expectations for agencies and for itself to successfully implement investment decision-making approaches. As such, we are recommending four specific actions for the Director of OMB to take.

OMB's first challenge is to help agencies improve their investment management processes. With effective processes in place, agencies should be in much stronger positions to make informed decisions about the relative benefits and risks of proposed IT spending. Without them, agencies will continue to be vulnerable to risks associated with excessively costly projects that produce questionable mission-related improvements. Under Sections 5112 and 5113 of the Information Technology Management Reform Act, the Director of OMB has responsibility for promoting and directing that federal agencies establish capital planning processes for information technology investment decisions. In designing governmentwide guidance for this process, we recommend that the Director of the Office of Management and Budget require agencies to:

• Implement IT investment decision-making processes that use explicitly defined, complete, and consistent criteria applied to all projects, regardless of whether project decisions are made at the departmental, bureau, or program level. With criteria that reflect cost, benefit, and risk considerations, applied consistently, agencies should be able to make more reasonable and better informed trade-offs between competing projects in order to achieve the maximum economic impact for their scarce investment dollars.

• Periodically analyze their entire portfolios of IT investments—at a minimum, new projects, projects in development, and operations and maintenance expenditures—to determine which projects to approve, cancel, or delay. With development and maintenance efforts competing directly with one another for funding, agencies will be better able to gauge the best proportion of investment in each category of spending to move away from their legacy bases of systems with excessive maintenance costs.

• Design control and evaluation processes that include cost, schedule, and quantitative performance assessments of projected versus actual improvement in mission outcomes. As a result, agencies should increase their capacity both to assess actual project results and to learn from experience which operational areas produce the highest returns and how well they estimate projects and deliver final results.

• Advise agencies in setting minimum quality standards for the data used to assess (qualitatively and quantitatively) the costs, benefits, and risks of IT investments. Agencies should demonstrate that all IT funding proposals include only data meeting these quality requirements and that projected versus actual results are assessed at critical project milestones. The audited data required by the CFO Act should help produce this accurate, reliable cost information. Higher quality information should result in better and more consistent decisions on complex information systems investments.

OMB's second challenge is to use the results produced by the improved investment processes to develop recommendations for the President's budget that reflect an agency's actual track record in delivering mission performance for IT funds expended.
Under Section 5113 of ITMRA, the Director of OMB is charged with evaluating the results of agency IT investments and enforcing accountability—including increases or reductions in agency IT funding proposals—through the annual budget process. In carrying out these responsibilities, we recommend that the Director of the Office of Management and Budget: Evaluate information system project cost, benefit, and risk data when analyzing the results of agency IT investments. Such analyses should produce agency track records that clearly and definitively show what improvements in mission performance have been achieved for the IT dollars expended. Ensure that agencies' investment control processes are in compliance with OMB's governmentwide guidance, and, if not, assess strengths and weaknesses and recommend actions and timetables for improvements. When results are questionable or difficult to determine, monitoring agency investment processes will help OMB diagnose problem causes by determining the degree of agency control and the quality of decisions being made. Use OMB's evaluation of each agency's IT investment control processes and IT performance results as a basis for recommended budget decisions to the President. This direct linkage should give agencies a strong, much-needed incentive to maximize the returns and minimize the risks of their scarce IT investments. To effectively implement improved investment management processes and make the appropriate linkages between agency track records and budget recommendations, OMB also has a third challenge. It will need to marshal the resources and skills to execute the new types of analysis required to make sound investment decisions on agency portfolios. Specifically, we recommend that the Director of the Office of Management and Budget: Organize an interagency group composed of budget, program, financial, and IT professionals to develop, refine, and transfer guidance and knowledge on best practices in IT investment management. Such a core group can serve as an ongoing source of practical knowledge and experience on the state of the practice for the federal government. Obtain expertise on an advisory basis to assist these professionals in implementing complete and effective investment management systems. Agency senior IRM management could benefit greatly from a high-quality, easily accessible means to solicit advice from capital planning and investment experts outside the federal government. Identify the types and amounts of skills required for OMB to execute IT portfolio analyses, determine the degree to which these needs are currently satisfied, specify the gap, and design and implement a plan, with timeframes and goals, to close it. Given existing workloads and the resilience of the OMB culture, without a determined effort to build the necessary skills, OMB will have little impact on the quality of IT investment decision-making. If necessary to augment its own staff resources, OMB should consider the option of obtaining outside support to help perform such assessments. Finally, as part of its internal implementation strategy, the Director of the Office of Management and Budget should consider developing an approach to assessing OMB's own performance in executing oversight responsibilities under the ITMRA capital planning and investment provisions.
Such a process could focus on whether OMB reviews of agency processes and results have an impact on reducing risk or increasing the returns on information technology investments—both within and across federal agencies. In its written comments on a draft of our report, OMB generally supported our recommendations and said that it is working toward implementing many aspects of the recommendations as part of the fiscal year 1998 budget review process for fixed capital assets. OMB also provided observations or suggestions in two additional areas. First, OMB stated that, given ITMRA's emphasis on agencies being responsible for IT investment results, it did not plan to validate or verify that each agency's investment control process is in compliance with OMB's guidance contained in its management circulars. As discussed in our more detailed evaluation of OMB's comments in appendix I, conducting selective evaluations is an important aspect of an overall oversight and leadership role because it can help identify management deficiencies that are contributing to poor IT investment results. Second, OMB noted that the relationship of IT investment processes between a Cabinet department and bureaus or agencies within the department was not fully evaluated and that additional attention would be needed as more data on this issue become available. We agree that our focus was on assessing agencywide processes and that continued attention to the relationships between departments, bureaus, and agencies will contribute to increased understanding across the government and will ultimately improve ITMRA's chances of success. This issue is discussed in more detail in our response to comments provided by the five agencies we reviewed (summarized at the end of chapter 2). The following are GAO's comments on the Office of Management and Budget's letter dated July 26, 1996. 1. As stated in the scope and methodology section of the report, we focused our analysis on agencywide processes. We agree that continued attention to this issue will contribute to increased understanding across the government and will ultimately improve ITMRA's chances of success. As noted in our response to comments received from the agencies we reviewed (provided at the end of chapter 2), we believe that a flexible distribution of departmental and agency/program/site IT decision-making is possible and can best be achieved by implementing standard decision criteria for all projects. In addition, we note that particular types of IT decisions, such as those that involve unusually high risk, have cross-functional impact, or provide common infrastructure, are more appropriately made at a centralized, departmental level. Experience gained during implementation of the Chief Financial Officers (CFO) Act showed that departmental-level CFOs needed time to build effective working relationships with their agency- or bureau-level counterparts. We believe the same will be true for Chief Information Officers (CIOs) established by ITMRA and that establishing and maintaining this bureau-level focus will be integral to ensuring the act's success. 2. ITMRA does squarely place responsibility and accountability for IT investment results with the head of each agency. Nevertheless, ITMRA clearly requires that OMB play a key policy leadership and implementation oversight role. While we agree that it may not be feasible to validate and verify every agency's investment processes, it is still essential that selected evaluations be conducted on a regular basis.
These evaluations can effectively support OMB's performance and results-based approach. They can help to identify and understand problems that are contributing to poor investment outcomes and also help perpetuate success by providing increased learning and sharing about what is and is not working. In order to develop a profile of each agency's IT environment, we asked the agencies to provide us information on the following: total IT expenditures for fiscal year 1990 through fiscal year 1994; total number of staff devoted to IRM functions and activities for fiscal year 1990 through fiscal year 1994; and costs for the 10 largest IT projects for fiscal year 1994 (as measured by total project life-cycle cost). To gather this information, we developed a data collection instrument and submitted it to responsible agency officials. Information supplied by the agencies is summarized below; each of the largest IT projects is listed by its status (under development, operational, or being implemented) with a brief description. We did not independently verify the accuracy of this information. Moreover, comparison of figures across the agencies is difficult because agency officials used different sources (such as budget data, IRM strategic plans, etc.) for the same data elements.
Under development: Provides an organizationwide microcomputer infrastructure and is the primary source for acquiring desktop, server and portable hardware; operating system and office automation system software; utilities and peripherals, training, personnel support, and cabling.
Operational: Provides continued support for the Coast Guard's existing microcomputer infrastructure.
Operational: Provides a consolidated accounting and pay system.
Under development: A configuration of sensors, communication links, personnel, and decision support tools that will modernize and expand the systems in three cities by incorporating radar sensor information overlaid on digital nautical charts as well as improved decision support systems.
Under development: Provides an automated and consolidated communication system.
Under development: Merges two maintenance systems for tracking and recording scheduled aviation maintenance actions.
Under development: Reprograms most of the existing Coast Guard developed applications to comply with the National Institute of Standards and Technology's Application Portability Profile.
Operational: Provides safety performance histories of vessels and involved parties and is used as a decision support tool for the Commercial Vessel Safety program.
Operational: Provides aviation technical publications in electronic format.
Under development: Consolidated into the Coast Guard Standard Workstation III system.
Operational: Performs funds control from commitments through payment; updates all ledgers and tables as transactions are processed; provides a standard means of data entry, edit, and inquiry; and provides a single set of reference and control files.
Operational: Contains data submitted to EPA under the Emergency Planning and Community Right to Know Act for chemicals and chemical categories listed by the agency. Data include chemical identity, amount of on-site users, release and off-site transfers, on-site treatment, and minimization/prevention actions. Public access is provided by the National Library of Medicine.
Operational: Supports management and administration of chemical samples from Superfund sites that are analyzed under agency contracts with chemical laboratories. The system schedules and tracks samples from site collection, through analysis, to delivery to the agency.
Operational: Stores air quality, point source emissions, and area/mobile source data required by federal regulations from the 50 states.
Operational: Superfund's official source of planning and accomplishment data. Serves as the primary basis for strategic decision-making and site-by-site tracking of cleanup activities.
Operational: Contains a set of computer applications and a major relational database which is used to support regulation development, air quality analysis, compliance audits, investigations, assembly line testing, in-use compliance, legislation development, and environmental initiatives.
Operational: Maintains basic data identifying and describing hazardous waste handlers; detailed information about hazardous waste treatment, storage, and disposal processes, environmental permitting, and information on inspections, violations, and enforcement actions; and tracks specific corrective action information needed to regulate facilities with hazardous waste releases.
Operational: Supports the National Pollutant Discharge Elimination System, a Clean Water Act program that issues permits and tracks facilities that discharge pollutants into our navigable waters.
Under development: A replacement for the existing Comprehensive Environmental Response, Compensation, and Liability Information System described above.
Operational: A PC LAN version of the Comprehensive Environmental Response, Compensation, and Liability Information System database used by EPA regional offices for data input and local analysis needs.
Under development: Acquire and install Tax System Modernization host-tier computers at three computing centers.
Under development: Integrates five systems that control, assign, prioritize, and track taxpayer inquiries; provides office automation, case folder review and inventories, and display and manipulation of case inquiry folders; automates collection cases; provides access to current tax return information; automates case preparation and closure; and provides standardized hardware and custom software to the criminal investigation function on a nationwide basis.
Under development: Integrates six systems that will receive and control information being transmitted to or from IRS; automates remittance processing activities; scans paper tax returns and correspondence for processing in an automated database; provides automated telephone assistance to customers; permits individual and business tax returns to be filed by utilizing a touch-tone phone; and provides access to all electronically filed returns that have been scored for potential fraud.
Operational: Provides case tracking, expanded legal research, a document management system for briefs, an integrated office system, time reporting, issue tracking, litigation support, and a decision support system.
Under development: Integrates three systems that provide application programs to query, search, update, analyze and extract information from a database; aggregates tax information into electronic case folders and distributes them to field locations; and provides the security infrastructure to support all components of the Tax System Modernization.
Under development: Provides a variety of workstation models, monitors, printers, operating systems and related equipment; provides for standardization of the small and medium-scale computers used by front line programs in the national and field offices and service centers.
Operational: Provides funding for (1) the mainframe and miscellaneous peripherals at each service center, (2) magnetic media and ADP supplies for all service centers, (3) lease and maintenance for support equipment, and (4) on-line access to taxpayer information and account status.
Under development: Provides an interim hardware platform at two computing centers to support master file processing and full implementation of the CFOL data retrieval/delivery system.
Under development: Provides upgradable software development workstations and workbench tools, including automated analysis and design tools; requirements traceability tools; construction kits with smart editors, compilers, animators, and debuggers; and static analyzers.
Under development: Integrates four systems that provide for ordering and delivery of telecommunication systems and services for Treasury bureaus; serves as a Government Open Systems Interconnection Profile prototype; provides centralized network and operations management; and will acquire about 14,000 workstations.
Under development: Receives, processes, archives, and distributes earth science research data from U.S., European, and Japanese polar platforms, selected Earth probes, the Synthetic Aperture Radar free flyer, selected existing databases, and other sources of related data.
Operational: Provides telecommunications and computation services for Marshall Space Flight Center.
Operational: Supports most data systems, networks, user workstations and telecommunications systems and provides maintenance, operations, software development, engineering, and customer support functions at Johnson Space Center.
Operational: Provides a family of compatible computing systems covering a broad performance range that will provide ground-based mission operations systems support.
Operational: Provides continuity of base operations, including federal information processing resources of sustaining engineering, computer operations, and communications services for Kennedy Space Center.
Operational: Acquisition of seven classes of scientific and engineering workstations plus supporting equipment.
Operational: Furnishes, installs, and tests the Advanced Computer Generated Image System; provides direct computational analysis and programming support to specific research disciplines and flight projects; and provides for the analysis, programming, engineering, and maintenance services for the flight simulation facilities. Also provides support for the Central Scientific and Computing Complex operation and systems maintenance as well as Complex-wide communications systems support and system administration of distributed computing and data reduction systems.
Operational: Provides a wide array of supporting services, including computational, professional, technical, administrative, engineering, and operations at the Lewis Research Center.
Under development: An information system including workstations, associated data processing, and communications, designed to integrate data from several National Weather Service information systems, as well as from field offices, regional and national centers, and other sources.
Operational: An initiative to acquire supercomputers necessary to run large complex numeric models as a key component of the weather forecast system.
Operational: A distributed-processing system architecture designed to acquire, process, and distribute satellite data and products.
Operational: An effort to replace a variety of obsolete technology in the National Marine Fisheries Service with a common computing infrastructure that supports distributed processing in an open system environment. The system stores, integrates, analyzes, and disseminates large quantities of living marine resource data.
Operational: Procurement of a high-performance computer system to provide support services for climate and weather research activities.
Operational: Geostationary Operational Environmental Satellite (GOES I-M) ground system, consisting of minicomputers with associated peripherals and satellite-dependent customized applications software to provide the monitoring, supervision, and data acquisition and processing functions for the GOES-Next satellites.
Operational: A system designed to support weather radars and associated display systems.
Operational: An effort to replace old mainframes as well as the associated channel-connected architecture with an open systems architecture.
Operational: Ground system consisting of minicomputers with associated peripherals and satellite-dependent customized applications software intended to provide the monitoring, supervision, and data acquisition and processing functions for the polar satellites.
Being implemented: A system of sensors, computers, display units, and communications equipment to automatically collect and process basic data on surface weather conditions, including temperature, pressure, wind, visibility, clouds, and precipitation.
This appendix is a compilation of work done by OMB and us on how federal agencies should manage information systems using an investment process. It is based upon analysis of the IT management best practices found in leading private and public sector organizations and is explained in greater detail in OMB's Evaluating Information Technology Investments: A Practical Guide. How do you know you have selected the best projects? Based on your evaluation, did the systems deliver what you expected? Key Question: How can you select the right mix of IT projects that best meets mission needs and improvement priorities? The goal of the selection phase is to assess and prioritize current and proposed IT projects and then create a portfolio of IT projects. In doing so, this phase helps ensure that the organization (1) selects those IT projects that will best support mission needs and (2) identifies and analyzes a project's risks and returns before spending a significant amount of project funds. A critical element of this phase is that a group of senior executives makes project selection and prioritization decisions based on a consistent set of decision criteria that compares costs, benefits, risks, and potential returns of the various IT projects. Initially filter and screen IT projects for explicit links to mission needs and program performance improvement targets using a standard set of decision criteria. Analyze the most accurate and up-to-date cost, benefit, risk, and return information in detail for each project. Create a ranked list of prioritized projects. Determine the most appropriate mix of IT projects (new versus operational, strategic versus maintenance, etc.) to serve as the portfolio of IT investments. An executive management team that makes funding decisions based on comparisons and trade-offs between competing project proposals, especially for those projects expected to have organizationwide impact. A documented and defined set of decision criteria that examines expected return on investment (ROI), technical risks, improvement to program effectiveness, customer impact, and project size and scope. Predefined dollar thresholds and authority levels that recognize the need to channel project evaluations and decisions to appropriate management levels to accommodate unit-specific versus agency-level needs. Minimum acceptable ROI hurdle rates, applied to projects across the organization, that must be met for projects to be considered for funding. Risk assessments that expose potential technical and managerial weaknesses.
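For illustration only, the following sketch shows how the selection-phase mechanics described above might be applied in practice: explicit criteria, an ROI hurdle rate, and a consistent weighted score used to screen and rank competing proposals. The project data, weights, and hurdle rate are invented assumptions, not values drawn from OMB's guide or from the agencies reviewed.

```python
# Illustrative sketch of a selection-phase screen and ranking.
# Project data, weights, and the hurdle rate are assumed for illustration only.

HURDLE_RATE = 0.10  # assumed minimum acceptable ROI for a project to be considered

# Each proposal carries the kinds of data the selection phase calls for:
# life-cycle cost, expected benefit, a 0-1 risk score, and a 0-1 mission-link score.
proposals = [
    {"name": "Project A", "cost": 12.0, "benefit": 18.0, "risk": 0.30, "mission_fit": 0.9},
    {"name": "Project B", "cost": 40.0, "benefit": 42.0, "risk": 0.70, "mission_fit": 0.6},
    {"name": "Project C", "cost": 5.0,  "benefit": 9.0,  "risk": 0.20, "mission_fit": 0.8},
]

def roi(project):
    """Simple return on investment: (benefit - cost) / cost."""
    return (project["benefit"] - project["cost"]) / project["cost"]

def score(project):
    """Weighted score combining return, mission fit, and risk (weights are assumptions)."""
    return 0.5 * roi(project) + 0.3 * project["mission_fit"] - 0.2 * project["risk"]

# Step 1: screen out proposals that fail the ROI hurdle or lack a mission link.
screened = [p for p in proposals if roi(p) >= HURDLE_RATE and p["mission_fit"] > 0]

# Step 2: rank the survivors with one consistent set of criteria.
ranked = sorted(screened, key=score, reverse=True)

for p in ranked:
    print(f'{p["name"]}: ROI={roi(p):.2f}, score={score(p):.2f}')
```

In this hypothetical run, Project B fails the hurdle rate and drops out, while Projects A and C are ranked by the same weighted criteria; the point is the consistency of the screen and ranking, not the particular weights chosen.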
Key Question: What controls are you using to ensure that the selected projects deliver the projected benefits at the right time and the right price? Once the IT projects have been selected, senior executives periodically assess the progress of the projects against their projected cost, schedule, milestones, and expected mission benefits. The type and frequency of the reviews associated with this monitoring activity are usually based on the analysis of risk, complexity, and cost that went into selecting the project; the reviews are performed at critical project milestones. If a project is late, over cost, or not meeting performance expectations, senior executives decide whether it should be continued, modified, or canceled. The steps of the control phase are to use a set of performance measures to monitor the developmental progress of each IT project in order to identify problems and to take action to correct discovered problems. Established processes that involve senior managers in ongoing reviews and force decisive action steps to address problems early in the process. Explicit cost, schedule, and performance measures to monitor expected versus actual project outcomes. An information system to collect project cost, schedule, and performance data, in order to create a record of progress for each project. Incentives for exposing and solving project problems. Key Question: Based on your evaluation, did the system deliver what was expected? The evaluation phase provides a mechanism for constantly improving the organization's IT investment process. The goal of this phase is to measure, analyze, and record results, based on the data collected throughout each phase. Senior executives assess the degree to which each project met its planned cost and schedule goals and fulfilled its projected contribution to the organization's mission. The primary tool in this phase is the postimplementation review (PIR), which should be conducted once a project has been completed. PIRs help senior managers assess whether a project's proposed benefits were achieved and refine the IT selection criteria. Compare actual project costs, benefits, risks, and return information against earlier projections. Determine the causes of any differences between planned and actual results. For each system in operation, decide whether it should continue operating without adjustment, be further modified to improve performance, or be canceled. Modify the organization's investment process based on lessons learned. Postimplementation reviews to determine actual costs, benefits, risks, and return. Modification of decision criteria and investment management processes, based on lessons learned, to improve the process. Maintenance of accountability by measuring actual project performance and creating incentives for even better project management in the future. The following sections briefly describe the information technology management processes at each of the five agencies we reviewed. These descriptions are intended to characterize the general workings of the agency processes at the time of our review. We used the selection/control/evaluation model (as summarized in appendix III and described in detail in OMB's Evaluating Information Technology Investments: A Practical Guide) as a template for describing each agency's IT management process. The Coast Guard had an IT investment process used to select IT projects for funding.
IT project proposals were screened, evaluated, and ranked by a group of senior IRM managers using explicit decision criteria that took into account project costs, expected benefits, and risk assessments. The ranked list with recommended levels of funding for each project was submitted for review to a board of senior Coast Guard officers and then forwarded to the Coast Guard Chief of Staff for final approval. EPA used a decentralized IT project initiation, selection, and funding process. Under this broad process, program offices independently selected and funded IT projects on a case-by-case basis as the need for a system was identified. EPA had IRM policy and guidance for IT project data and analysis requirements—such as a project-level risk assessment and a cost-benefit study—that the program offices had to meet in order to proceed with system development. EPA did not have a consistent set of decision criteria for selecting IT projects. IT selection and funding activities within IRS differed depending on whether the project was part of the Tax System Modernization (TSM) or an operational system. In 1995, IRS created a senior-level board for selecting, controlling, and evaluating information technology investments and began to rank all of the proposed TSM projects using its cost, risk, and return decision criteria. However, these criteria were largely qualitative, the data used were not validated or reliable, and the analyses were not based on calculations of expected return on investment. According to IRS, its investment review board used a separate process with different criteria for evaluating operational systems. The board did not review research and development systems or field office systems. IRS did not compare the results of its different evaluation processes. Within NASA, IT project selection and funding decisions were made by domain-specific program managers. NASA had two general types of IT funding—program expenditures and administrative spending. Most of NASA's IT funding was embedded within program-specific budgets. Managers of these programs had autonomy to make system-level and system support IT selection decisions. Administrative IT systems were generally managed by the cognizant NASA program office or center. NASA has recently established a CIO council to set high-level policies and standards, approve information resources management plans, and address issues and initiatives. The council will also serve as the IT capital investment advisory group to the proposed NASA Capital Investment Council. NASA plans for this Capital Investment Council to have responsibility for looking at all capital investments across NASA, including those for IT. While this Capital Investment Council may fill the need for identifying cross-functional opportunities, it is not yet operational. IT project selection and funding decisions at NOAA were made as part of its strategic management and budgeting process. NOAA had seven work teams—each supporting a NOAA strategic goal—that prioritized incoming funding requests. Managers on these work teams negotiated to determine IT project funding priorities within the scope of their respective strategic goals. These prioritization requests were then submitted to NOAA's Executive Management Board, which had final agency decision authority over all expenditures.
A key decision criterion used by the work teams was the project's contribution to the agency's strategic goals; however, no standard set of decision criteria was used in the prioritization decisions. Other data, such as cost-benefit analyses, were also sometimes used to evaluate IT project proposals, although use of these data sources was not mandatory. The Coast Guard conducted internal system reviews, but these reviews were not used to monitor the progress of IT projects. The review efforts were designed to address ways to improve efficiency, reduce project cost, and reduce project risk. Cost, benefit, and schedule data were also collected annually for some new IT projects, but the Coast Guard did not measure mission benefits derived from each of its projects. EPA had a decentralized managerial review process for monitoring IT projects. EPA's IRM policy set requirements for the minimum level of review activity that program offices had to conduct, but program offices had primary responsibility for overseeing the progress of their IT projects. In an effort to provide a forum for senior managerial review of IT projects, EPA, in 1994, created the Executive Steering Committee (ESC) for IRM to guide EPA's agencywide IRM activities. The ESC was chartered to review IRM projects that are large, important, or cross-organizational. The committee's first major system review was scheduled for some time in 1996. EPA is currently formulating the data submission requirements for the ESC reviews. IRS regularly conducted senior management program control meetings (PCM) to review the cost and schedule activity of TSM projects. IRS had two types of PCMs. The four TSM sites—Submission Processing, Computing Center, Customer Service, and District Office—conducted PCMs to monitor the TSM activity under their purview. Also, IRS could hold “combined PCMs” to resolve issues that spanned the TSM sites. IRS did not conduct PCMs to monitor the performance of operational systems. To date, the (1) working procedures, (2) required decision documents, (3) reliable cost, benefit, and return data, and (4) explicit quantitative decision criteria needed for an effective investment control process are not in place for the IRS Investment Review Board. NASA senior executives regularly reviewed the cost and schedule performance of major programs and projects, but they reviewed only the largest IT projects. No central IRM review has been conducted since 1993. NASA put senior-level CIOs in place for each NASA center, but these CIOs exercised limited control over mission-related systems and had limited authority to enforce IT standards or architecture policies. NASA's proposed Capital Investment Council, which is intended to supplement the Program Management Council by reviewing major capital investments, may address this concern once the Investment Council is operational. NOAA conducted quarterly senior-level program status meetings to review the progress and performance of major systems and programs, such as those in the NWS modernization. NOAA had defined performance measures to gauge the progress toward its strategic goals, but did not have specific performance measures for individual IT systems. Also, while some offices had made limited comparisons of actual to expected IT project benefits, NOAA did not require the collection or assessment of mission benefit accrual information on IT projects. The Coast Guard did not conduct any postimplementation reviews of IT projects.
Instead, the Coast Guard focused its review activity on systems that were currently under development. EPA did not conduct any centralized postimplementation reviews. EPA did conduct postimplementation reviews as part of the General Services Administration's (GSA) triennial review requirement, but curtailed this activity in 1992 when the GSA requirement was lifted. IRS directives required that postimplementation reviews be conducted 6 months after an IT system is implemented. At the time of our review, IRS had conducted five postimplementation reviews and had developed a standard postimplementation review methodology. However, no mechanisms were in place to ensure that the results of these IRS investment evaluation reviews were used to modify the IRS selection and control decision-making processes or alter funding decisions for individual projects. NASA did not conduct or require any centralized project postimplementation reviews. NASA stopped conducting centralized IRM reviews in 1993 and now instead urges programs to conduct IRM self-assessments. While the agency conducted other reviews, NOAA's IRM office participated in only four IRM reviews over the last 3 years. These reviews tended to focus on specific IT problems, such as evaluating the merits of electronic bulletin board systems or difficulties being encountered in digitizing nautical navigation maps. No postimplementation reviews had been conducted over the past 3 years. On February 10, 1996, the Information Technology Management Reform Act of 1996 (Division E of Public Law 104-106) was signed into law. This appendix is a summary of the information technology investment-related provisions from this act; it is not the actual language contained in the law. Information technology (IT) is defined as any equipment, or interconnected system or subsystem of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information. It may include equipment used by contractors. The OMB Director is to promote and be responsible for improving the acquisition, use, and disposal of IT by federal agencies. The OMB Director is to develop a process (as part of the budget process) for analyzing, tracking, and evaluating the risks and results of major capital investments for information systems; the process shall include explicit criteria for analyzing the projected and actual costs, benefits, and risks associated with the investments over the life of each system. The OMB Director is to report to the Congress (at the same time the budget is submitted) on the net program performance benefits achieved by major capital investments in information systems and how the benefits relate to the accomplishment of agency goals. The OMB Director shall designate (as appropriate) agency heads as executive agents to acquire IT for governmentwide use. The OMB Director shall encourage agencies to develop and use “best practices” in acquiring IT.
The OMB Director shall direct that agency heads (1) establish effective and efficient capital planning processes for selecting, managing, and evaluating information systems investments, (2) before investing in new information systems, determine whether a government function should be performed by the private sector, the government, or a government contractor, and (3) analyze their agencies' missions and revise the mission-related and administrative processes (as appropriate) before making significant investments in IT. Through the budget process, the OMB Director is to review selected agency IRM activities to determine the efficiency and effectiveness of IT investments in improving agency performance. Agency heads are to design and implement a process for maximizing the value and assessing and managing the risks of IT investments. The agency process is to (1) provide for the selection, management, and evaluation of IT investments, (2) be integrated with the processes for making budget, financial, and program management decisions, (3) include minimum criteria for selecting IT investments and specific quantitative and qualitative criteria for comparing and prioritizing projects, (4) provide for identifying potential IT investments that would result in shared benefits with other federal, state, or local governments, (5) provide for identifying quantifiable measurements for determining the net benefits and risks of IT investments, and (6) provide the means for senior agency managers to obtain timely development progress information, including a system of milestones for measuring progress, on an independently verifiable basis, in terms of cost, capability of the system to meet specified requirements, timeliness, and quality. Agency heads are to ensure that performance measurements are prescribed for IT and that the performance measurements measure how well the IT supports agency programs. Where comparable processes and organizations exist in either the public or private sectors, agency heads are to quantitatively benchmark agency process performance against such processes in terms of cost, speed, productivity, and quality of outputs and outcomes. Agency heads may acquire IT as authorized by law (the Brooks Act—40 U.S.C. 759—is repealed by sec. 5101) except that the GSA Administrator will continue to manage the FTS 2000 and follow-on to that program (sec. 5124(b)). Agency heads are to designate Chief Information Officers (in lieu of designating IRM officials—as a result of amending the Paperwork Reduction Act appointment provision). Agency Chief Information Officers (CIOs) are responsible for (1) providing advice and assistance to agency heads and senior management to ensure that IT is acquired and information resources are managed in a manner that implements the policies and procedures of the Information Technology Management Reform Act of 1996, is consistent with the Paperwork Reduction Act, and is consistent with the priorities established by the agency head, (2) developing, maintaining, and facilitating the implementation of a sound and integrated agency IT architecture, and (3) promoting effective and efficient design and operation of major IRM processes.
Agency heads (in consultation with the CIO and CFO) are to establish policies and procedures that (1) ensure accounting, financial, and asset management systems and other information systems are designed, developed, maintained, and used effectively to provide financial or program performance data for agency financial statements, (2) ensure that financial and related program performance data are provided to agency financial management systems on a reliable, consistent, and timely basis, and (3) ensure that financial statements support the assessment and revision of agency mission-related and administrative processes and the measurement of performance of agency investments in information systems. Agency heads are to identify (in their IRM plans required under the Paperwork Reduction Act) major IT acquisition programs that have significantly deviated from the cost, performance, or schedule goals established for the program (the goals are to be established under title V of the Federal Acquisition Streamlining Act of 1994). This section establishes which provisions of the title apply to “national security systems.” “National security systems” are defined as any telecommunications or information system operated by the United States government that (1) involves intelligence activities, (2) involves cryptologic activities related to national security, (3) involves command and control of military forces, (4) involves equipment that is an integral part of a weapon or weapon system, or (5) is critical to the direct fulfillment of military or intelligence missions. This section requires the GSA Administrator to provide (through the Federal Acquisition Computer Network established under the Federal Acquisition Streamlining Act of 1994 or another automated system) not later than January 1, 1998, governmentwide on-line computer access to information on products and services available for ordering under the multiple award schedules. The Information Technology Management Reform Act takes effect 180 days from the date of enactment (February 10, 1996). David McClure, Assistant Director; Danny R. Latta, Adviser; Alicia Wright, Senior Business Process Analyst; Bill Dunahay, Senior Evaluator; John Rehberger, Information Systems Analyst; Shane Hartzler, Business Process Analyst; Eugene Kudla, Staff Evaluator.
Pursuant to a congressional request, GAO reviewed the: (1) information technology (IT) investment practices of several federal agencies and compared them to those used by leading private- and public-sector organizations; and (2) Office of Management and Budget (OMB) as it responds to the investment requirements of the Information Technology Management Reform Act (ITMRA). GAO found that: (1) leading private- and public-sector organizations manage their IT projects as investments and rank projects based on maximizing returns and minimizing risks; (2) the five federal agencies reviewed have some elements of an IT investment process in place, but they lack a complete, institutionalized approach that would fulfill the requirements of ITMRA or the Paperwork Reduction Act; (3) the agencies reviewed need to manage their IT projects as an investment portfolio, viewing each project as a competing investment and making decisions based on the project's overall contribution to the agency's goals; (4) the agencies do not conduct postimplementation reviews (PIR) to determine actual costs, returns, and risks; (5) agency IT decisions are based on inconsistent or inaccurate data; (6) with the exception of the Coast Guard, none of the agencies reviewed has a set of explicit criteria for making IT decisions; and (7) OMB has taken a proactive role in developing IT investment policy to assist federal agencies in implementing ITMRA requirements.
The federal government began with a debt of about $75 million in 1790. In February 1941, the Congress and the President enacted a law that set an overall limit of $65 billion on Treasury debt obligations that could be outstanding at any one time. The law was amended to raise the debt ceiling several times between February 1941 and June 1946. The ceiling established in June 1946, $275 billion, remained in effect until August 1954. At that time, the first temporary debt ceiling was enacted, which added $6 billion to the $275 billion permanent ceiling. The Congress and the President have enacted numerous temporary and permanent increases in the debt ceiling. As shown in figure 1, the amount of outstanding debt subject to the debt ceiling has increased from $1.6 trillion on September 30, 1984, to $6.7 trillion on September 30, 2003. The total amount of debt subject to the debt ceiling as of January 31, 2003, the month before Treasury entered into the 2003 debt issuance suspension period, was about $6.4 trillion. About 44 percent, or $2.8 trillion, was held by federal government accounts with investment authority, such as the Social Security trust funds, the Civil Service Retirement and Disability Trust Fund (Civil Service fund), the Exchange Stabilization Fund (ESF), and the Government Securities Investment Fund of the Federal Employees’ Retirement System (G-Fund). The remaining $3.6 trillion represents marketable and nonmarketable obligations held by the public. The Secretary of the Treasury has several responsibilities related to the federal government’s financial management operations, including paying the government’s obligations and investing receipts of federal government accounts with investment authority not needed for current benefits and expenses. To meet these responsibilities, the Secretary of the Treasury is authorized by law to issue the necessary obligations to federal government accounts with investment authority for investment purposes and to borrow the necessary funds from the public to pay government obligations. Under normal circumstances, the debt ceiling is not an impediment to carrying out these responsibilities. Treasury is notified by the appropriate agency (such as the Office of Personnel Management for the Civil Service fund) of the amount that should be invested (or reinvested), and Treasury makes the investment. In some cases, the agency may also specify the obligation that Treasury should purchase. The Treasury obligations issued to federal government accounts with investment authority count against the debt ceiling. If these accounts’ receipts are not invested, the amount of debt subject to the debt ceiling does not increase. We have previously reported on aspects of Treasury’s actions during the 2002 debt issuance suspension period and the 1995/1996 and other debt ceiling crises (see Related GAO Products). When Treasury is unable to borrow because the debt ceiling has been reached, the Secretary of the Treasury is unable to fully discharge his financial management responsibilities using normal methods. In 1985, the federal government experienced a debt ceiling crisis from September 3 through December 11. During that period, Treasury took several actions that were similar to those discussed later in this report. For example, Treasury redeemed Treasury obligations held by the Civil Service fund earlier than normal in order to borrow sufficient cash from the public to meet the fund’s benefit payments and did not invest some of the fund’s receipts. 
In 1986 and 1987, after Treasury’s experiences during prior debt ceiling crises, the Congress enacted several authorities authorizing the Secretary of the Treasury to use the Civil Service fund and the G-Fund to help Treasury manage its financial operations during a debt ceiling crisis. Those authorities, which Treasury used during the 2003 debt issuance suspension period, addressed (1) redemption of Civil Service fund obligations, (2) suspension of Civil Service fund investments, and (3) suspension of G-Fund investments. 1. Redemption of obligations held by the Civil Service fund. Subsection 8348(k) of title 5, United States Code, authorizes the Secretary of the Treasury to redeem obligations or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. The Secretary of the Treasury must determine that a debt issuance suspension period exists in order to redeem Civil Service fund obligations early. The statute authorizing the debt issuance suspension period and its legislative history are silent as to how the Secretary of the Treasury should determine the length of a debt issuance suspension period. 2. Suspension of Civil Service fund investments. Subsection 8348(j) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if the investment cannot be made without causing the amount of public debt to exceed the debt ceiling. Subsection (j) also authorizes the Secretary of the Treasury to make the Civil Service fund whole after the debt issuance suspension period has ended. 3. Suspension of G-Fund investments. Subsection 8438(g) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if issuance cannot occur without causing the amount of public debt to exceed the debt ceiling. Subsection (g) also authorizes the Secretary of the Treasury to make the G-Fund whole after the debt issuance suspension period has ended. During the 2003 debt issuance suspension period, Treasury relied upon authorities in addition to those mentioned above to help manage the amount of debt subject to the debt ceiling. Treasury has also relied on these other authorities during prior periods when it needed to take special actions to avoid exceeding the debt ceiling. Section 5302 of title 31, United States Code, authorizes the Secretary of the Treasury to determine when and if excess funds for ESF will be invested. During previous debt ceiling difficulties, Treasury used this authority to suspend reinvestment of maturing ESF investments to ensure that the debt ceiling was not exceeded. In addition to obligations issued under subsection 8348(d) of title 5, United States Code, other obligations are lawful investments by the Civil Service fund. For example, subsection 8348(e) of title 5, United States Code, authorizes the Secretary of the Treasury to invest surplus Civil Service funds in other interest-bearing obligations of the United States or obligations guaranteed as to both principal and interest by the United States, if the Secretary of the Treasury determines that the purchases are in the public interest. 
Further, obligations issued by other agencies, such as the Tennessee Valley Authority, the United States Postal Service, and the Federal Financing Bank (FFB), are lawful investments for all fiduciary, trust, and public funds whose investments are under the control of the United States, and such obligations are suitable investments for the Civil Service fund. Treasury relied on such authorities during the 1985 and 1995/1996 debt ceiling crises to exchange obligations issued (commonly referred to as FFB 9(a) obligations) or held by FFB that were not subject to the debt ceiling for Treasury obligations held by the Civil Service fund that were subject to the debt ceiling. In addition to the authorities previously discussed, Treasury has on occasion received special authorities that pertained to specific situations. These special authorities are discussed in our report on the 1995/1996 debt ceiling crisis. Gains and losses associated with federal government accounts with investment authority and Treasury's general fund can occur for a variety of reasons. For example, (1) the type of obligation held may be more susceptible to changes in interest rates and (2) the procedures used to make adjustments can have significant consequences for an account's earnings. Whether these gains and losses affect an account's recipients depends on whether the fund balance is used to determine recipients' benefits. One example where the fund balance has a direct impact on participants is the G-Fund. Specifically, G-Fund earnings are directly related to the amount that G-Fund participants will receive when they redeem their investments. On the other hand, the fund balance in the Civil Service fund does not affect the ultimate payments that retirees and their surviving dependents will receive because the payments will be made from the Treasury general fund even if the Civil Service fund's assets are fully liquidated. Appendix I provides additional information on how gains and losses may occur in accounts with investment authority. Our objectives were to (1) develop a chronology of significant events related to the 2003 debt issuance suspension period, (2) evaluate the actions taken during the 2003 debt issuance suspension period in relation to the normal policies and procedures Treasury uses for investments and redemptions for major federal government accounts with investment authority, (3) analyze the financial aspects of Treasury's actions taken during the 2003 debt issuance suspension period and assess the legal basis of these actions, and (4) analyze the impact of the policies and procedures Treasury used to manage the debt during the 2003 debt issuance suspension period. To develop a chronology of the significant events related to the 2003 debt issuance suspension period, we obtained and reviewed applicable documents. We also discussed Treasury's actions during the debt issuance suspension period with senior Treasury officials. To evaluate the actions taken during the 2003 debt issuance suspension period in relation to the normal policies and procedures Treasury uses for certain federal government accounts with investment authority, we obtained an overview of the policies and procedures used and reviewed selected investment and redemption activity to determine whether those transactions were processed in accordance with Treasury's normal policies and procedures.
Over 200 different federal government accounts with investment authority hold Treasury obligations, and Treasury officials stated that normal investment and redemption policies and procedures were used for all but 3 of these accounts. From the federal government accounts with investment authority for which Treasury used its normal investment and redemption policies and procedures, we selected for review accounts with (1) investments in Treasury obligations that exceeded $10 billion on January 31, 2003 (17 accounts), or (2) recurring investment or redemption transactions of $1 billion or more from February through May 2003 (8 accounts). For 18 of these 25 accounts, we reviewed selected investment and redemption transactions from February through May 2003. For the remaining 7 accounts, which are managed by the Bureau of the Public Debt, we reviewed all investment and redemption transactions from February through May 2003 except those related to 1 account. For this account, we reviewed all investment and redemption transactions that exceeded $250 million. The 25 selected federal government accounts with investment authority accounted for about 77 percent, or about $2.1 trillion, of the $2.8 trillion in Treasury obligations held by federal government accounts with investment authority on January 31, 2003. For all 25 selected accounts in our review, we confirmed with personnel from the respective agencies the total amount of investment and redemption activity reported by Treasury from February 1, 2003, through May 31, 2003. In any case where normal investment and redemption policies and procedures were not followed, we obtained documentation and other information to help us understand the basis for and impact of the alternative policies and procedures that were used. To analyze the financial aspects of Treasury’s actions that departed from normal investment and redemption policies and procedures, we (1) reviewed the methodologies Treasury developed to minimize the impact of such departures on the G-Fund, ESF, and the Civil Service fund; (2) quantified the impact of the departures; (3) assessed whether any principal and interest losses were fully restored; and (4) assessed whether any losses were incurred that could not be restored under Treasury’s current statutory authority. To assess the legal basis for Treasury’s departures from its normal policies and procedures, we identified the applicable legal authorities and determined how Treasury applied them during the 2003 debt issuance suspension period. Our evaluation included authorities related to issuing and redeeming Treasury obligations during a debt issuance suspension period and restoring losses after such a period has ended. To analyze the impact of the policies and procedures used by Treasury to manage the debt during a debt issuance suspension period, we reviewed the actions taken and the Treasury policies and procedures used during the 2003 debt issuance suspension period. To determine the stated policies and procedures used that related to the Civil Service fund and FFB exchange transactions, we discussed with Treasury officials the actions taken during this period and examined the support for these actions. We also compiled and analyzed source documents relating to previous debt issuance suspension periods, including executive branch legal opinions, memorandums, and correspondence. We performed our work from February 2003 through March 2004, in accordance with U.S. generally accepted government auditing standards. 
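As a concrete illustration of the account-selection rule described above, the fragment below applies the two thresholds (Treasury holdings above $10 billion on January 31, 2003, or recurring investment or redemption transactions of $1 billion or more from February through May 2003) to hypothetical data. The account names and dollar amounts are invented, and treating "recurring" as occurring in every month is an assumption made only for this sketch.

```python
# Illustrative application of the two account-selection thresholds described
# in this report. Account names and dollar amounts (in billions) are hypothetical.

accounts = [
    {"name": "Fund X", "holdings_jan31_2003": 250.0, "monthly_activity": [3.0, 2.5, 4.0, 3.5]},
    {"name": "Fund Y", "holdings_jan31_2003": 4.0,   "monthly_activity": [1.2, 1.1, 1.3, 1.0]},
    {"name": "Fund Z", "holdings_jan31_2003": 2.0,   "monthly_activity": [0.1, 0.2, 0.1, 0.1]},
]

def selected_for_review(account):
    """Select accounts whose holdings exceeded $10 billion on January 31, 2003,
    or that had recurring transactions of $1 billion or more from February
    through May 2003 (interpreting 'recurring' as every month is an assumption)."""
    large_holdings = account["holdings_jan31_2003"] > 10.0
    recurring_activity = all(amount >= 1.0 for amount in account["monthly_activity"])
    return large_holdings or recurring_activity

print([a["name"] for a in accounts if selected_for_review(a)])  # ['Fund X', 'Fund Y']
```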
We requested comments on a draft of this report from the Secretary of the Treasury or his designee. The written response from Treasury’s Under Secretary for Domestic Finance is reprinted in appendix V. In June 2002, the debt ceiling was raised to $6.4 trillion. In December 2002, Treasury concluded that this amount might be reached in the latter half of February 2003. Table 1 shows the significant actions the Congress and the executive branch took from June 28, 2002, through June 30, 2003, that relate to the debt ceiling. Federal government accounts with investment authority that are authorized to invest their receipts, such as the Civil Service fund, the G-Fund, the Social Security funds, and the Federal Employee Health Benefits Fund, are generally authorized or required to invest them in nonmarketable Treasury obligations. Under normal conditions, Treasury is notified by the appropriate agency of the amount that should be invested or reinvested on its behalf, and Treasury then makes the investment. In some cases, the actual obligation that Treasury should purchase is also specified. When a federal government account with investment authority needs to pay benefits and expenses, Treasury is normally notified of the amount and the date that the disbursement is to be made. Depending on the account, Treasury may also be notified to redeem specific obligations. Based on this information, Treasury redeems an account’s obligations. Our analysis of the 25 major federal government accounts with investment authority for which Treasury stated it had followed its normal investment and redemption policies and procedures during the 2003 debt issuance suspension period showed that for all but 1 account—the Highway Trust Fund—Treasury used its normal investment and redemption policies and procedures to handle receipts and maturing investments and to redeem Treasury obligations. Table 2 lists the federal government accounts with investment authority included in our analysis. On February 27, 2003, Treasury redeemed about $343 million of Highway Trust Fund obligations in error. In March 2003, during its normal reconciliation processes, Treasury identified this error. Although normally such errors are corrected by investing the funds redeemed in error on the date the error is detected, Treasury did not do so. Rather, it decided to hold the excess funds in an uninvested funds account until they were needed to pay Highway Trust Fund expenses. The funds were used to pay the fund’s expenses through March 24, 2003. According to Treasury officials, the primary reasons for not making the necessary reinvestment transaction on the date the error was detected and validated were that (1) the Highway Trust Fund does not earn interest on its investments and (2) the time necessary to identify the error and fully understand its impact meant that very little time actually elapsed when the funds could have been invested. Therefore, the Highway Trust Fund was not harmed by Treasury’s decision to not invest the funds. However, Treasury officials subsequently agreed that the over-redemption should have been reinvested on the day the error was detected and adequate information was available to understand the amount that should have been invested, regardless of whether the Highway Trust Fund earns interest on its investments. Holding the excess funds in an uninvested funds account reduced the amount of debt subject to the debt ceiling by no more than $343 million for 26 days during the 2003 debt issuance suspension period. 
To determine whether Treasury would have exceeded the debt ceiling if it had not committed this error or had reinvested the over-redeemed amount of funds when the error was discovered, we reviewed the invested balances in the G-Fund during this period. As noted elsewhere in this report, Treasury used the G-Fund during the 2003 debt issuance suspension period to ensure that the investment activities associated with federal government accounts with investment authority, such as the Highway Trust Fund, do not cause Treasury to exceed the debt ceiling. Based on our review, we found that the debt ceiling would not have been exceeded even if Treasury had not made the original error or had invested these funds when the error was detected, since other policies and procedures would have ensured a corresponding reduction in the amount of funds invested on behalf of the G-Fund. For example, on February 27, 2003, the computation Treasury used to determine the amount that should be invested in the G-Fund showed that Treasury could invest about $22.9 billion of G-Fund receipts. If the Highway Trust Fund error had not been made, this computation would have shown that Treasury could have invested about $22.6 billion in the G-Fund, or about $0.3 billion less than what was actually invested. Therefore, the amount of debt subject to the debt ceiling would have remained unchanged from its reported $6.4 trillion level. Subsection 8438(g)(1) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if the issuance cannot be made without causing the amount of public debt to exceed the debt ceiling. Each day from February 20, 2003, to May 27, 2003, Treasury determined the amount of funds that the G-Fund would be allowed to invest in Treasury obligations and, when necessary, suspended some investments and reinvestments of the G-Fund receipts and maturing obligations that would have caused the debt ceiling to be exceeded. On February 20, 2003, when the Secretary of the Treasury determined that a debt issuance suspension period had begun, the G-Fund held about $48.3 billion of Treasury obligations that would mature that day. To ensure that it did not exceed the debt ceiling, Treasury did not reinvest about $8.5 billion of these obligations on that date. The amount of the G-Fund’s receipts that Treasury invested changed daily, depending on the amount of the federal government’s outstanding debt. Although Treasury can accurately predict the outcome of some events that affect the outstanding debt, it cannot precisely determine the outcome of others until they occur. For example, the amount of obligations that Treasury will issue to the public from an auction can be determined some days in advance because Treasury can control the amount that will be issued. On the other hand, the amount of savings bonds that will be issued and redeemed and the amount of obligations that will be issued to, or redeemed by, various federal government accounts with investment authority are difficult to precisely predict. Because of these difficulties, Treasury needed a way to ensure that the normal investment and redemption activities associated with Treasury obligations did not cause the debt ceiling to be exceeded and also to maintain normal investment and redemption policies for the majority of these accounts. 
To do these things, each day during the debt issuance suspension period, Treasury (1) calculated the amount of debt subject to the debt ceiling, excluding the receipts that the G-Fund would normally invest; (2) determined the amount of G-Fund receipts that could safely be invested without exceeding the debt ceiling and invested this amount in Treasury obligations; and (3) suspended investment, when necessary, of the G-Fund's remaining receipts. For example, on February 27, 2003, the amount of debt subject to the debt ceiling, excluding the G-Fund's requested investment of about $49 billion, was about $6,377 billion, or about $23 billion below the debt ceiling. Accordingly, Treasury invested about $23 billion in the G-Fund. The remaining $26 billion was uninvested. In accordance with law, interest on the uninvested funds was paid once the debt issuance suspension period ended. During the 2003 debt issuance suspension period, the G-Fund lost about $362.5 million in interest because it was not fully invested. Subsection 8438(g)(3) of title 5, United States Code, requires the Secretary of the Treasury to make the G-Fund whole by restoring any losses once the debt issuance suspension period has ended. On May 27, 2003, when the debt ceiling was raised, Treasury fully invested the G-Fund's receipts and on May 28, 2003, fully restored the lost interest on the G-Fund's uninvested funds. Consequently, the G-Fund was fully compensated for its interest losses during the 2003 debt issuance suspension period. We verified that after this interest payment, the G-Fund's obligation holdings were, in effect, the same as they would have been had the debt issuance suspension period not occurred.
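A minimal sketch of the daily determination described above, using the rounded February 27, 2003, figures cited in this report; the function name and structure are ours, not Treasury's.

```python
# Sketch of the daily G-Fund investment determination described above.
# Figures are the rounded February 27, 2003, amounts cited in the text.
DEBT_CEILING = 6_400e9  # statutory ceiling in effect during the 2003 period

def g_fund_investment(debt_excluding_g_fund: float, g_fund_requested: float):
    """Return (amount invested for the G-Fund, amount left uninvested)."""
    headroom = max(DEBT_CEILING - debt_excluding_g_fund, 0.0)
    invested = min(g_fund_requested, headroom)
    return invested, g_fund_requested - invested

# Debt excluding the G-Fund's ~$49 billion request was about $6,377 billion.
invested, uninvested = g_fund_investment(6_377e9, 49e9)
print(f"invested ~${invested / 1e9:.0f} billion, uninvested ~${uninvested / 1e9:.0f} billion")
# -> invested ~$23 billion, uninvested ~$26 billion, consistent with the example above.
```

Because the suspended amount adjusts to whatever headroom remains each day, other investment activity, such as a correction of the Highway Trust Fund error discussed earlier, would simply have changed the amount invested for the G-Fund rather than caused the ceiling to be exceeded.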
Actions Related to ESF

On several occasions from March 31, 2003, through May 23, 2003, Treasury did not reinvest some of the maturing obligations held by ESF. Because ESF's obligations are considered part of the federal government's outstanding debt subject to the debt ceiling, that debt is reduced when the Secretary of the Treasury does not reinvest ESF's maturing obligations. Since ESF was not fully invested, it incurred interest losses of $3.6 million during the 2003 debt issuance suspension period. The Secretary of the Treasury is not authorized by law to restore these losses. The purpose of ESF is to help provide a stable system of monetary exchange rates. The law establishing ESF authorizes the Secretary of the Treasury to invest ESF's balances not needed for program purposes in obligations of the federal government. This law also gives the Secretary of the Treasury the sole discretion for determining when, and if, the excess funds will be invested. During previous debt ceiling crises, Treasury exercised the option of not reinvesting ESF's maturing Treasury obligations, which helped the federal government to stay within the debt ceiling and enabled Treasury to subsequently raise additional cash.

During the 2003 debt issuance suspension period, the Secretary of the Treasury redeemed certain Treasury obligations held by the Civil Service fund earlier than normal and suspended the investment of certain Civil Service fund receipts. In addition, as discussed later, the Civil Service fund exchanged Treasury obligations it held for a $15 billion FFB 9(a) obligation. Subsection 8348(k)(1) of title 5, United States Code, authorizes the Secretary of the Treasury to redeem obligations or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. The statute does not require that early redemptions be made only for the purpose of making Civil Service fund payments. Further, the statute permits early redemptions even if the Civil Service fund has adequate cash balances to cover such payments. Before redeeming Civil Service fund obligations earlier than normal, the Secretary of the Treasury must determine that a debt issuance suspension period exists. The statute authorizing the debt issuance suspension period and its legislative history are silent as to how to determine the length of a debt issuance suspension period. On April 4, 2003, the Secretary of the Treasury declared that a debt issuance suspension period, as it relates to the Civil Service fund, would begin no later than April 11, 2003, and would last until July 11, 2003. On May 19, 2003, the Secretary of the Treasury extended this period until December 19, 2003.

On April 8, 2003, and May 20, 2003, Treasury redeemed about $12.2 billion and $20.2 billion, respectively, of the Civil Service fund's Treasury obligations using its authority under subsection 8348(k)(1) of title 5, United States Code. The $32.4 billion redemption amount was determined based on (1) the length of the initial debt issuance suspension period (April 8 through July 11, 2003) and the related extension (through December 19, 2003) and (2) the estimated monthly Civil Service fund benefit payments that would occur during that time. These were appropriate factors to use in determining the amount of Treasury obligations to redeem early. The $12.2 billion redeemed early on April 8, 2003, covered the estimated benefit payments for May, June, and July 2003. As such, when May's benefit payments were due, Treasury redeemed only the $60 million difference between the amount that had been redeemed early for the month of May and the actual amount of benefit payments to be made.
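A rough reconstruction of how the $32.4 billion early-redemption amount relates to the declared period and the estimated monthly benefit payments. The month counts and the implied per-month figures below are inferences from the reported dates and totals, not figures stated in the report.

```python
# Rough reconstruction of the early-redemption sizing discussed above.
# The month counts are inferred from the reported dates and totals; they are
# assumptions, not figures stated in the report.
initial_redemption = 12.2e9    # April 8, 2003: covered estimated May-July 2003 benefits
extension_redemption = 20.2e9  # May 20, 2003: after the extension to December 19, 2003

implied_monthly_initial = initial_redemption / 3       # ~ $4.07 billion per month
implied_monthly_extension = extension_redemption / 5   # ~ $4.04 billion per month (Aug.-Dec.)
total_redeemed_early = initial_redemption + extension_redemption

print(f"implied monthly benefit estimates: ~${implied_monthly_initial / 1e9:.2f}B and "
      f"~${implied_monthly_extension / 1e9:.2f}B; total redeemed early "
      f"~${total_redeemed_early / 1e9:.1f}B")
# -> roughly $4 billion per month in estimated benefits, and $32.4 billion in total,
#    consistent with the amounts reported above.
```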
Subsection 8348(j)(1) of title 5, United States Code, authorizes the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if the investment cannot be made without causing the amount of public debt to exceed the debt ceiling. From April 8, 2003, through May 26, 2003, the Civil Service fund had about $2.5 billion in receipts that were not invested. On May 27, 2003, after the debt ceiling was raised, these receipts were invested. When the Secretary of the Treasury redeems obligations earlier than normal or refrains from promptly investing Civil Service fund receipts because of debt ceiling limitations, the Secretary is required by subsection 8348(j)(3) of title 5, United States Code, to immediately restore, to the maximum extent practicable, the Civil Service fund's obligation holdings to the proper balances when a debt issuance suspension period ends and to restore lost interest on the next normal interest payment date. Consequently, Treasury took the following actions once the debt issuance suspension period had ended: Treasury invested about $30.8 billion of uninvested receipts on May 27, 2003. These receipts were associated with (1) collections made by the Civil Service fund that had not been invested and (2) funds associated with the early redemptions that had not been used for benefit payments and expenses. Treasury also paid the Civil Service fund, on June 30, 2003, about $100.8 million as compensation for principal and interest losses incurred because of the actions it had taken. This was the first semiannual interest payment date since the debt issuance suspension period ended, and June 30, 2003, was the proper restoration date according to the statute authorizing the restoration. We verified that after these transactions the Civil Service fund's obligation holdings were, in effect, the same as they would have been had the debt issuance suspension period not occurred.

During fiscal year 2003, Treasury initiated the following actions involving the Civil Service fund, FFB, and the Treasury general fund related to its efforts to (1) address FFB cash flow issues resulting from previously issued FFB 9(a) obligations to the Civil Service fund and (2) manage the amount of debt subject to the debt ceiling:

On October 18, 2002, FFB redeemed prior to maturity $15 billion in FFB 9(a) obligations held by the Civil Service fund. FFB 9(a) obligations do not count against the debt ceiling. These FFB 9(a) obligations were the result of a series of transactions stemming from a Treasury-directed exchange of Treasury obligations held by the Civil Service fund for FFB 9(a) obligations to assist Treasury in managing the debt during the 1985 debt ceiling crisis. This early redemption resulted in a loss to the Civil Service fund of over $1 billion in interest on October 18, 2002.

On March 5, 2003, FFB issued an FFB 9(a) obligation of about $15 billion to the Civil Service fund in exchange for about $15 billion in Treasury obligations that had been held by the Civil Service fund. FFB used the Treasury obligations to purchase FFB 9(b) obligations held by the Secretary of the Treasury. As a result, the FFB 9(b) obligations were canceled, and the Treasury obligations, which were no longer outstanding once Treasury reacquired them, were also canceled. Consequently, Treasury was provided about $15 billion in additional borrowing authority under the debt ceiling.

On June 30, 2003, FFB redeemed early the 9(a) obligation it had issued to the Civil Service fund on March 5. Treasury reinvested the FFB redemption proceeds in accordance with its normal investment policies and procedures.

Our review found that on March 5, 2003, and June 30, 2003, the Civil Service fund received fair value, based on a present value analysis, for the obligations it surrendered. However, whether the Civil Service fund will have any long-term gains or losses associated with these transactions will not be known for some time. Gains or losses on the exchange of obligations between the Civil Service fund and FFB can result when (1) the exchange occurs or (2) the underlying assumptions used to determine the exchange price are not realized. We have found that the initial transactions between FFB and the Civil Service fund relating to a given period in which Treasury was experiencing debt ceiling difficulties were fair to both parties on the date of the exchange. However, quantifying the long-term effects of these transactions on the parties involved is difficult and complex because the exchanges were structured to last many years. The longer the period in the analysis used to evaluate the fairness of a given transaction, such as a present value analysis, the greater the probability that the underlying assumptions used to determine the original exchange price will not accurately reflect the future years' events. This risk is also incurred when the obligations relating to an exchange remain outstanding for a long time.
When the assumptions used to determine the initial exchange prices are not realized (e.g., the obligation is redeemed sooner than expected), gains and losses can result from interest rate changes and from reinvestment of the repayment in obligations that do not have comparable maturities. For further discussion on the limitations of using a present value methodology to determine gains and losses, see appendix II. In some cases, we have been able to quantify the gains or losses related to the fiscal year 2003 transactions that have occurred or can be expected to occur. However, in other cases, the information needed to understand the potential consequences of the actions taken on March 5 and June 30, 2003, will not be available for a number of years, and we are unable to determine the potential impacts at this time. Table 3 summarizes the gains and losses associated with the fiscal year 2003 transactions between the Civil Service fund, FFB, and the Treasury general fund that we have been able to quantify and those that cannot be determined at this time. As discussed in the preceding narrative and shown in table 3, it is difficult to quantify all the losses and gains associated with the transactions between FFB and the Civil Service fund. A more detailed explanation of these gains and losses, as well as the reasons why not all of the effects of these transactions can be quantified at this time, is provided in appendix III.

Regardless of whether they sustain any additional gains or losses over the long term, the Civil Service fund, FFB, and the Treasury general fund incurred increased risks of gains and losses that they would not have incurred if these transactions had not occurred. More important, the risks related to the transactions between FFB and the Civil Service fund are not typically incurred by these organizations during their normal operations. It is important to remember that these exchange transactions, and the risks associated with them, are not undertaken for programmatic reasons. Rather, they are made at the direction of the Secretary of the Treasury to help manage the federal government's operations when debt ceiling difficulties occur. FFB and Treasury have flexibilities that allow them to structure transactions that reduce or even eliminate the losses that FFB can incur. However, similar flexibilities are not available to the Civil Service fund. Furthermore, although the Secretary of the Treasury has statutory authority to restore losses resulting from not investing Civil Service fund receipts or from early redemption of Treasury obligations held by the Civil Service fund during a debt issuance suspension period, the Secretary does not have the statutory authority to restore the types of losses, discussed above, that result from exchange transactions. Appendix IV discusses transactions between the Civil Service fund and FFB that related to previous debt management difficulties.

As we noted in our December 2002 report, documented policies and procedures would allow Treasury to better determine the potential impacts associated with the policies and procedures it implements to manage the amount of debt subject to the debt ceiling. Although Treasury adopted our recommendation and developed policies and procedures for managing investment and redemption activities of the Civil Service fund and the G-Fund during a debt issuance suspension period, such policies and procedures do not address how exchange transactions between the Civil Service fund and FFB should be handled.
While we recognize that Treasury needs a great deal of flexibility to structure transactions that fit specific events, we believe that guidelines related to exchange transactions between the Civil Service fund and FFB can be developed that minimize the risk to both parties. Documenting these policies and procedures allows Treasury management to ascertain their effects and whether those effects introduce any additional risks to the parties involved. In addition, documenting the policies and procedures allows Treasury to understand whether it may need additional statutory authority to ensure that all funds are adequately protected. Furthermore, if effectively implemented, documentation of the policies and procedures reduces the chance for confusion and risk of errors should Treasury need to use the policies and procedures in the future. These points were discussed in our December 2002 report to Treasury. During our review of the actions taken during the 2003 debt issuance suspension period that were affected by those policies and procedures, we found that none of the problems or potential problems that we discovered in the 2002 debt issuance suspension period had occurred.

The Secretary of the Treasury can take many actions to manage federal government operations during a debt issuance suspension period. In some cases, these actions pose no long-term financial risk to affected parties because of the statutory authorities currently available to the Secretary of the Treasury. As noted earlier, Treasury used these authorities to restore, in total, $463 million in losses incurred by the G-Fund and the Civil Service fund. However, other actions expose the affected parties to financial risks that are not normally incurred as part of their programmatic operations. Whether the risks associated with specific actions result in actual losses or gains may not be known until many years after the action has been taken. History has shown, however, that the risks may be substantial. For example, according to FFB estimates, on October 18, 2002, the Civil Service fund lost interest of over $1 billion on a $15 billion transaction entered into in 1985 because of the unexpected early redemption of 9(a) obligations issued by FFB and unforeseen interest rate changes. Treasury lacks the statutory authority to restore such losses and has not developed the documented policies and procedures that could be used to minimize such losses in future exchanges between FFB and federal government accounts with investment authority, such as the Civil Service fund.

We recommend that the Secretary of the Treasury perform the following two actions:

Seek the statutory authority to restore the losses associated with the October 2002 early redemption of FFB 9(a) obligations. The amount of the restoration should be computed in a manner that maintains equity between the Civil Service fund and Treasury.

Direct the Under Secretary for Domestic Finance to document the necessary policies and procedures that should be used for exchange transactions between FFB and a federal government account with investment authority during a debt issuance suspension period and seek any statutory authority necessary to implement the policies and procedures.
In written comments on a draft of this report, Treasury agreed with our recommendations and stated that (1) it will seek statutory authority to restore losses incurred by federal government accounts with investment authority and by FFB as a result of actions taken for the purpose of fiscal management during a “debt limit impasse” and (2) it will document appropriate policies and procedures that should be used for exchange transactions between FFB and a federal government account with investment authority to ensure long-term fairness to all parties. Treasury has stated that the authority it will seek includes the restoration of the losses associated with the October 2002 early redemption of FFB 9(a) obligations as we recommended. Until Treasury develops its specific legislative proposal and the policies and procedures it will use relating to transactions between FFB and federal government accounts with investment authority, we cannot determine the scope of the statutory authority it may seek. Treasury also noted that it has already taken certain steps in documenting the policies and procedures that should be used in future exchange transactions. Treasury stated that it plans to use FFB’s independent auditor to “ensure that the terms and structure clearly achieve the intended accounting result and long-term financial fairness to all parties, prior to transaction approval and execution.” Treasury and its independent auditor will need to ensure that this arrangement does not result in a problem with auditor independence under U.S. generally accepted government auditing standards. The independence standard requires that auditors should avoid situations that could lead reasonable third parties with knowledge of the relevant facts and circumstances to conclude that the auditor is not able to maintain independence in conducting its financial statement audit. For example, audit organizations should not perform management functions or make management decisions for entities that they also audit. Specific technical comments provided orally by Treasury were incorporated in this report as appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations; the Senate Committee on Governmental Affairs; the Senate Committee on the Budget; the Senate Committee on Finance; the Subcommittee on Financial Management, the Budget, and International Security, Senate Committee on Governmental Affairs; the House Committee on Appropriations; the House Committee on Government Reform; the House Committee on the Budget; the House Committee on Ways and Means; the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform; and the Subcommittee on Civil Service and Agency Organization, House Committee on Government Reform. We are also sending copies of this report to the Secretary of the Treasury, the Under Secretary for Domestic Finance of the Department of the Treasury, the Inspector General of the Department of the Treasury, the Director of the Office of Management and Budget, and other agency officials. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you need further assistance or if you or your staff have any questions concerning this report, please contact Chris Martin, Senior Level Technologist, at (202) 512-9481 or Louise DiBenedetto, Assistant Director, at (202) 512-6921. Other key contributors to this report were Wendy M. Albert, Arkelga L. 
Braxton, and Richard T. Cambosos.

Gains and losses can be broken down into two main categories: (1) gains and losses associated with normal investment and redemption activity and (2) gains and losses associated with unusual events, such as a debt issuance suspension period. Treasury's long-standing position is that gains and losses associated with normal investment and redemption activity are borne by the applicable federal government account and that no special action should be taken to adjust an account's investment portfolio for these gains and losses. On the other hand, when the loss is incurred because of unusual events and account participants have a vested interest in the fund, Treasury has, in many cases, received the necessary authority to restore such losses.

Federal government accounts with investment authority generally invest in interest-bearing nonmarketable Treasury obligations. The investment and redemption activities related to these obligations can cause gains and losses from, for example, changing interest rates and certain errors that are found and corrected. Treasury has a long-standing position that gains and losses associated with normal investment and redemption activities are a cost of doing business. Therefore, Treasury makes no attempt to adjust an account's investment portfolio for such activities. As noted earlier, one of Treasury's basic management policies for federal government accounts with the authority to invest is to maintain equity between these accounts and the general fund—the fund used to pay most government obligations. To do so, Treasury issues two basic types of nonmarketable obligations—market-based obligations and par value specials. Most market-based obligations are mirror images of existing Treasury obligations that are traded on the open market and are purchased or sold at open market prices. Par value specials, on the other hand, are issued and redeemed at par. The interest rates for par value specials are specified in the enabling statute or by administrative action. For example, for the G-Fund, Civil Service fund, and Social Security funds, the par value rate is based on the average rate for comparable marketable obligations, as defined by Treasury, with 4 or more years to maturity. This rate is established monthly, and all investments for a given month must bear the same rate. When a federal government account with investment authority needs to redeem obligations to pay benefits and expenses, Treasury redeems these obligations and pays the fund the par value plus any accrued interest. Although only certain accounts are allowed to invest in par value specials, the majority of the $2.8 trillion of account investments on January 31, 2003, were invested in par value specials.

Equity between accounts investing in par value specials and the Treasury general fund is not maintained because (1) the interest rate is determined only monthly and (2) the rate and the redemption price do not depend on how long the obligations are actually held, as shown in the following examples:

The interest rate used to invest an account's receipts is determined only monthly. If market interest rates fall during the month, the Treasury general fund pays the account more interest than market conditions dictate; if market interest rates rise during the month, the investment account receives less interest than market conditions dictate.

Many federal government accounts with investment authority holding par value specials hold these obligations for a number of years. Accordingly, the interest rates can vary significantly.
For example, the Civil Service fund has obligations that carry interest rates ranging from 3.5 percent to 8.75 percent in its portfolio that matures on June 30, 2005. Even the rates for the portfolio that matures on June 30, 2014, range from 3.5 percent to 6.5 percent. However, when the obligations are needed to pay benefits, they are redeemed at par regardless of current market rates. In times of high interest rates, redeeming a low-interest-rate obligation at par benefits the account redeeming the par value special. On the other hand, during periods of low interest rates, redeeming obligations at par benefits Treasury's general fund.

The interest rate paid on Treasury obligations with 4 or more years to maturity is based on a statutory formula developed in the 1920s to ensure equal semiannual interest payments for obligations held for exactly 1 year. However, as we noted in our 1987 report on the Civil Service fund, when investments are held for less than a year, Treasury's method may result in the account being either overcompensated or undercompensated. In the major accounts with investment authority, such as the Civil Service and Social Security funds, a large number of investments in par value specials are subsequently redeemed, sometimes just days later, for benefit payments and expenses, rather than held to their maturity. Activity associated with current-year investments that were subsequently redeemed in the current investment year for program benefits and expenses can be significant. Such activity totaled well over $100 billion between January 31, 2003, and June 30, 2003, for the Civil Service and Social Security funds. In addition, as noted elsewhere, the G-Fund, whose investments receive the par value rate, redeems and reinvests its entire portfolio each business day.

Treasury makes many adjustments to the accounting records to reflect accounting events. Reasons for adjustments may include (1) information received late from an account because of agreed-upon processing delays, such as those associated with the Social Security funds, and (2) certain errors made by either Treasury or the account. We found in a 1987 review that the procedures for making adjustments to accounts holding par value specials, procedures that are still in use, do not ensure that the results of adjustments are equitable. For example, during our 1987 review we noted that one error that Treasury made and corrected cost the Civil Service fund almost $400,000 in lost interest earnings. Specifically, according to Treasury records, the Office of Personnel Management (OPM) instructed Treasury to redeem about $400 million of obligations on behalf of the Civil Service fund on July 5, 1984. However, Treasury did not make this redemption until OPM notified Treasury of the error in August. Treasury then redeemed the lowest-interest-bearing obligations available at that time, which had rates of 8.75 and 9.75 percent. The interest earnings for this redemption were computed through July 5 (the original requested redemption date). Had the redemption taken place on July 5, the obligations bearing interest rates of 7.5 and 7.625 percent would have been used because the portfolio held lower-rate obligations at that time. As a result, the Civil Service fund lost about $400,000 of interest earnings. Treasury agreed with our methodology for computing the effects of this error and with the amount of the loss.
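Conceptually, the cost of an adjustment error like the 1984 case turns on the interest-rate differential between the obligations actually redeemed and those that should have been redeemed, applied to the affected principal for the affected period. The sketch below is generic; the affected period is an assumed placeholder, and the sketch is not a reconstruction of the computation GAO and Treasury agreed on in 1987.

```python
# Generic sketch of a rate-differential loss from a corrected redemption error.
# The affected period is an assumed placeholder; this is not a reconstruction of
# the 1987 computation, which depended on the specific obligations involved.
def differential_loss(principal: float, rate_redeemed: float,
                      rate_intended: float, years_affected: float) -> float:
    """Interest forgone because higher-rate obligations left the portfolio in
    place of the lower-rate obligations that should have been redeemed."""
    return principal * (rate_redeemed - rate_intended) * years_affected

# Placeholder inputs: $400 million principal, 8.75 versus 7.5 percent rates,
# and an assumed one-month affected period.
print(f"${differential_loss(400e6, 0.0875, 0.075, 1 / 12):,.0f}")
# With these placeholder inputs the result is on the order of $400,000, the same
# order of magnitude as the loss reported above, although the actual computation differed.
```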
Treasury and the Congress have a long-standing practice of obtaining or providing the necessary authority to restore interest that was not credited to an account with investment authority because of unusual events. GAO, the Congress, Treasury, and agencies associated with the accounts commonly refer to this forgone interest as a loss to the fund. Several examples follow.

In OPM's comments on our report on the actions taken during the 1985 debt ceiling crisis, it stated that the Civil Service fund "should be 'made whole' when available funds are not properly invested. This is especially important for situations . . . when the [fund] lost interest as the result of debt ceiling limitations."

Section 6002 of the Omnibus Budget Reconciliation Act of 1986 added subsections (j), (k), and (l) to section 8348 of title 5, United States Code, (1) to authorize the Secretary to suspend investment of amounts in the Civil Service fund in government obligations and to redeem prior to maturity government obligations held by the Civil Service fund when necessary to avoid exceeding the debt ceiling and (2) to authorize the Secretary to make the fund whole after the debt issuance suspension period. The joint explanatory statement of the committee of conference accompanying the Omnibus Budget Reconciliation Act of 1986 states that the amendment requires the Secretary "to make the Fund whole for any earnings lost as a result of the suspension or disinvestment by a combination of special cash payment actions."

Treasury's July 30, 2003, letter to the Congress concerning the 2003 debt issuance suspension period stated that Treasury had paid interest "totaling $100,822,854.44, representing the amount that would have been earned, but for the debt issuance suspension period." Treasury also noted that this "represents the interest lost" by the Civil Service fund.

It is also a long-standing practice for the Congress and the President to provide the necessary authority to restore losses caused by unusual events. For example, during the 1985 debt ceiling crisis, Treasury was granted the authority to restore the majority of interest losses associated with its actions to avoid exceeding the debt ceiling. Furthermore, as recommended in our report on the 1985 debt ceiling crisis, Treasury received the authority in 1986 and 1987 to fully restore the losses associated with certain actions it takes in regard to the Civil Service fund and G-Fund during debt ceiling difficulties.

A present value analysis provides a basis for valuing an obligation under current market conditions when that obligation is purchased, sold, or exchanged before maturity. The present value of an obligation depends on (1) the coupon rate, (2) the length of time the obligation is outstanding, and (3) the current market rate (commonly referred to as the discount factor). Table 4 shows a simple example of the present values of three $1 million obligations bearing a coupon rate of 6 percent with three different maturities and using three different discount factors. As shown in table 4, when the discount factor differs from the coupon rate, the present value of an obligation will differ from the face value—the longer the time interval, the greater the increase or decrease in value.
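Table 4 is not reproduced here, but its setup is straightforward to sketch: the present value of a $1 million, 6 percent coupon obligation at several maturities and discount factors. The sketch below assumes annual coupon payments, and the specific maturities and discount factors shown are illustrative; the table itself may use different conventions.

```python
# Present value of a fixed-coupon obligation, following the setup described for table 4.
# Assumes annual coupon payments; the maturities and discount factors are illustrative.
def present_value(face: float, coupon_rate: float, years: int, discount_rate: float) -> float:
    coupons = sum(face * coupon_rate / (1 + discount_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + discount_rate) ** years
    return coupons + principal

for years in (1, 5, 15):
    for discount in (0.04, 0.06, 0.08):
        pv = present_value(1_000_000, 0.06, years, discount)
        print(f"{years:>2} years at {discount:.0%}: ${pv:,.0f}")
# When the discount factor equals the 6 percent coupon, the present value equals face
# value; the further the discount factor is from the coupon and the longer the maturity,
# the larger the premium or discount, as the text above notes.
```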
As noted in our discussion on the effects of exchanges of obligations between the Civil Service fund, the Federal Financing Bank (FFB), and the Department of the Treasury (Treasury), Treasury used a present value analysis to help ensure that the exchange of Treasury obligations held by the Civil Service Retirement and Disability Fund (Civil Service fund) for obligations issued by FFB was fair to both parties. The present value approach was also used to determine the amount of losses incurred by the Civil Service fund when FFB repaid its obligations before they were scheduled to mature. A key assumption in making a present value calculation is that the underlying assumptions on interest rates and cash flows will not change. For example, if a present value calculation shows that an obligation’s cash flows are worth $1 billion today assuming that the $1 billion can be invested in a 4 percent obligation that matures on June 30, Year 2, then it is critical that the investment be made in an obligation that bears an interest rate of 4 percent and that the obligation matures on June 30, Year 2. Otherwise, a gain or loss can occur if interest rates change, as shown in table 5. Although the initial exchange was fair, as shown in table 5, since the actual terms of the obligations issued were not the same as those used in the present value assumption, the account is subject to risks associated with interest rate changes. Another limitation associated with a present value analysis is that it does not consider reinvestment risk. For example, in the case of the March 5, 2003, exchange between FFB and the Civil Service fund, the Treasury obligations exchanged matured from June 30, 2004, through June 30, 2011. However, the FFB 9(a) obligation received had a different cash flow. Therefore, if the principal and interest payments associated with the FFB 9(a) obligation could not be invested at the same discount factor used in the present value analysis, then a gain or loss would result. If the cash flows can be invested at a higher interest rate, then a gain will occur. Conversely, if the cash flows are reinvested at a lower rate, then a loss will occur. During the 1985 debt ceiling crisis, Treasury for the first time invested excess receipts of the Civil Service fund in FFB 9(a) obligations. Because FFB 9(a) obligations are not subject to the debt ceiling, this action allowed Treasury to borrow more cash from the public. At the time of the purchase, these FFB 9(a) obligations carried the same terms and conditions as the Treasury obligations held by the Civil Service fund. As such, as long as FFB did not redeem the debt obligations prior to maturity or the obligations were not otherwise redeemed before needed to pay Civil Service fund expenses in accordance with its normal redemption policies and procedures, the exchange transaction would result in no adverse consequences for the Civil Service fund. However, on October 18, 2002, FFB exercised its right to redeem its obligations before maturity, which resulted in over $1 billion in interest losses to the Civil Service fund. According to FFB calculations, the present value interest loss to the Civil Service fund was over $1 billion when FFB redeemed its obligations. 
FFB appropriately calculated this loss using a present value methodology that assumed that the Civil Service fund could invest the $15 billion proceeds from the early redemption of the FFB 9(a) obligations at 3.875 percent— the October 2002 investment rate for Civil Service fund investments—and the funds could be invested with the same maturities as the redeemed FFB 9(a) obligations. Table 6 shows a comparison of the maturity dates and interest rates associated with the FFB 9(a) obligations that were redeemed early. FFB’s present value analysis assumed that the redemption proceeds would be invested at 3.875 percent using the same maturity dates that were applicable to the original FFB 9(a) obligations. The redemption proceeds were actually invested in a 3.875 percent obligation that matured on June 30, 2003, since Treasury’s normal policies and procedures require that current-year receipts be invested in obligations that mature on June 30 of the current investment year. On June 30, 2003, $10 billion of the October 18, 2002, investment was, in effect, reinvested in obligations bearing an interest rate of 3.5 percent—the rate applicable to Civil Service fund investments for June 2003. Accordingly, Treasury invested $10 billion with $5 billion maturing on June 30, 2004, and $5 billion on June 30, 2005, at an interest rate of 3.5 percent. The remaining $5 billion that was received on October 18, 2002, was used to pay current-year fund benefits and expenses and therefore was not available for reinvestment on June 30, 2003. Although FFB redemption proceeds were invested with the same maturity dates as the original FFB 9(a) obligations, they will be invested for a time at 3.5 percent rather than the 3.875 percent assumed in the present value analysis. Therefore, in addition to the over $1 billion interest loss incurred on October 18, 2002, discussed above, the Civil Service fund will incur about $33.4 million of additional interest losses (commonly referred to as a nominal interest loss) in these future years because of the lower-than-assumed interest rate on the reinvested amounts. Table 7 compares the expected interest earnings associated with the October 18, 2002, FFB 9(a) redemption prior to maturity using the present value assumptions and the expected interest earnings that would be received if the obligations were held to maturity. In our report on the 1985 debt ceiling crisis, we noted that Treasury officials stated that a basic trust fund management policy is to ensure equity between the various trust funds and the Treasury general fund—the fund used to pay most government obligations—and that none of the funds unduly benefit from Treasury’s management. Although the losses discussed in this section resulted from a transaction between FFB and the Civil Service fund, the transaction between these two funds was not initially undertaken for programmatic reasons; rather, it was undertaken by the Secretary of the Treasury to help manage debt during the 1985 debt ceiling crisis, and the early redemption in 2002 was undertaken to help manage FFB’s cash flow problems. On March 5, 2003, Treasury exchanged certain Treasury obligations held by the Civil Service fund for an FFB 9(a) obligation of about $15 billion issued by FFB to the Civil Service fund. The purpose of this transaction, similar to the purpose of the transaction that occurred during the 1985 debt ceiling crisis discussed above, was to make about $15 billion of additional borrowing authority available under the debt ceiling. 
Figure 2 shows the process for debt ceiling relief during fiscal year 2003. However, unlike the terms and conditions of the 1985 exchange transaction, the terms and conditions of the FFB 9(a) obligation issued to the Civil Service fund during the 2003 debt issuance suspension period were different from those of the Treasury obligations surrendered. Specifically, the terms of the FFB 9(a) obligation held by the Civil Service fund stated that if FFB redeemed its obligation before maturity, the redemption price would be based on current market rates rather than par value, which was the basis used in the 1985 exchange. Therefore, to ensure that the value of the exchange was fair to both parties on the date of the exchange, Treasury used a present value analysis to compare the value of future cash flows expected from the Treasury obligations being exchanged by the Civil Service fund with the value of future cash flows expected from the FFB 9(a) obligation. On June 30, 2003, FFB redeemed its March 5, 2003, 9(a) obligation before the December 2035 maturity date. As discussed below, these transactions introduced risks to the Civil Service fund that it would not have incurred had this exchange not taken place. The transactions between the Civil Service fund and FFB fairly compensated the Civil Service fund, based on a present value analysis, on the date of the exchanges. The net result of the March 5, 2003, and June 30, 2003, transactions between the Civil Service fund and FFB was that the Civil Service fund had about $1.153 billion more in Treasury obligations than it did before the March 5, 2003, transaction. This increase in Treasury obligations held by the Civil Service fund occurred because the prevailing market interest rates at the time of the exchanges were lower than the rates of the Treasury obligations exchanged with FFB. However, the Civil Service fund had a gain of only about $139.5 million because it had to invest the proceeds from the obligation FFB redeemed on June 30, 2003, at a lower interest rate. In other words, the Civil Service fund had more principal to invest but was unable to invest that principal at a rate as high as the rate of the Treasury obligations it had surrendered. Therefore, the Civil Service fund needed more principal to generate approximately the same returns as the obligations it had originally surrendered during the transaction on March 5, 2003. The long-term economic effect of the June 30, 2003, transaction on the Civil Service fund depends on the terms of the obligations in which the proceeds are invested. In this case, one way to have helped ensure that the Civil Service fund would not have cash flow gains or losses associated with investment of the proceeds from the FFB redemption would have been to invest the proceeds using a methodology that ensured that the fund had cash flows similar to those from the original Treasury obligations used for the exchange on March 5, 2003. This methodology is commonly referred to as a “cash flow” approach. However, the cash flow approach can also result in gains and losses, since it does not consider the reinvestment risks that may be present. Appendix II discusses how the cash flow methodology ensures that a cash flow gain or loss does not occur and how reinvestment risks are not considered in this methodology. Treasury’s approach for investing the June 30, 2003, FFB redemption proceeds was to apply its normal investment policies and procedures. 
In this case, Treasury, in effect, (1) replaced the dollar value of the obligations used for the March 5, 2003, exchange with 3.5 percent Treasury obligations and (2) divided the remaining proceeds equally over a 15-year period. While this approach differs from the cash flow approach and may result in future gains and losses, the key point is that Treasury has not yet developed documented policies and procedures for managing such transactions. The process of documenting the policies and procedures that should be used for such transactions allows Treasury’s management to understand the impacts of various alternatives and whether they introduce any additional risks to the parties involved. It also helps Treasury evaluate whether it may need additional statutory authority to ensure that all accounts are adequately protected. Further, if effectively implemented, documentation of policies and procedures reduces the chance for confusion and risk of errors should Treasury need to use the policies and procedures in the future. FFB and the Treasury general fund had gains and losses associated with the March 5, 2003, and the June 30, 2003, transactions. As shown in table 3, the net result for FFB of these two transactions was a $633 million loss on June 30, 2003. FFB expects to earn about $1.153 billion in future years to offset this loss. The Treasury general fund also lost $520 million, which is not expected to be recovered. Several key decisions and actions related to the March 5, 2003, and June 30, 2003, transactions are discussed below. On March 5, 2003, Treasury purchased from FFB the Treasury obligations (par value specials) that FFB had acquired from the Civil Service fund. Treasury agreed to pay FFB about $520 million more than the par value of these obligations. As payment for this purchase, Treasury sold back to FFB 9(b) obligations issued by FFB that Treasury held. In effect, Treasury canceled about $15.7 billion of the FFB 9(b) obligations it held with about $15.2 billion of Treasury par value specials that FFB had received from the Civil Service fund. Therefore, FFB had a gain and the Treasury general fund had a corresponding loss on the exchange. Table 8 shows how this transaction generated a gain for FFB. The March 5, 2003, exchange was in contrast to FFB’s October 18, 2002, early redemption of its 9(a) obligations held by the Civil Service fund. On October 18, 2002, Treasury decided that the FFB 9(a) obligations being redeemed prior to maturity that were related to Treasury's effort to manage the 1985 debt ceiling would be redeemed at par value and that the Civil Service fund would incur the loss. Treasury’s redemption of par value specials in excess of their par value is also in contrast to its normal policies and procedures, which allow agencies holding the par value specials only to redeem them from Treasury at face value to pay for the fund’s benefits and expenses. If Treasury had accepted the par value specials at par rather than at current market rates, then the total losses to FFB would have been about $1.153 billion rather than the $633 million total net loss resulting from the March 5, 2003, and June 30, 2003, transactions. The $1.153 billion is also the amount of the gain FFB expects to make in future periods. 
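The roughly $520 million premium reflects the gap between the par value of the specials and their value when their 5.25 percent coupon cash flows are discounted at a lower market rate. The sketch below illustrates that relationship, and it anticipates the alternative-coupon comparison discussed in the next paragraph. The 4.5 percent discount rate and single five-year maturity are illustrative assumptions chosen only so that the par amounts land near the figures cited in this report; they are not the terms Treasury actually used.

```python
# Simplified sketch of how the coupon on the par value specials drives the premium
# (or discount) relative to the FFB 9(b) obligations canceled. The 4.5 percent market
# rate and single 5-year maturity are illustrative assumptions, not the actual terms.
def price_per_dollar_of_par(coupon: float, market_rate: float, years: int) -> float:
    """Market value of $1 of par for an annual-pay obligation."""
    discount = (1 + market_rate) ** -years
    annuity = (1 - discount) / market_rate
    return coupon * annuity + discount

ffb_9b_canceled = 15.7e9           # market value delivered / FFB 9(b) obligations canceled
for coupon in (0.0525, 0.03875):   # actual coupon, and the lower-coupon alternative
    price = price_per_dollar_of_par(coupon, market_rate=0.045, years=5)
    par_needed = ffb_9b_canceled / price
    general_fund_gain = par_needed - ffb_9b_canceled
    print(f"{coupon:.3%} specials: par needed ~${par_needed / 1e9:.1f}B, "
          f"general fund gain ~${general_fund_gain / 1e9:+.1f}B")
# With these assumptions, 5.25 percent specials require roughly $15.2 billion of par
# (about a $0.5 billion loss to the general fund, in line with the $520 million above),
# while 3.875 percent specials would require roughly $16.1 billion of par and would
# produce a gain of similar size, as the next paragraph discusses.
```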
The key difference between the March 5, 2003, exchange and the normal exchanges between Treasury and federal government accounts with investment authority related to their investments in par value specials is that for the March 5, 2003, exchange between Treasury and FFB, a present value analysis was used to calculate the amount of debt that should be removed from Treasury’s books—the same analysis that Treasury used to ensure that the exchange between FFB and the Civil Service fund was fair. Whether FFB or the Treasury general fund incurs a gain or loss when par value specials are used to cancel FFB 9(b) obligations depends greatly on the value of the Treasury obligations held by the Civil Service fund that are selected for the exchange. For example, if the interest rates on the Treasury obligations held by the Civil Service fund had been 3.875 percent rather than 5.25 percent, Treasury would have exchanged par value specials with a value of about $16.2 billion held by the Civil Service fund with FFB, and FFB would have provided these to Treasury to cancel the $15.7 billion of FFB 9(b) obligations. In this case, Treasury would have recognized a gain rather than the loss that was recorded because 5.25 percent par value specials were used in the exchange. Table 9 provides a simplified example of how this works. As discussed earlier, when FFB decided on June 30, 2003, to redeem prior to maturity the 9(a) obligation it had issued to the Civil Service fund, another present value analysis was performed. As a result of this analysis, FFB had to borrow about $16.6 billion from Treasury using 9(b) obligations to redeem the $15 billion FFB 9(a) obligation it had issued to the Civil Service fund. FFB needed these additional funds because the FFB 9(a) obligation was based on an interest rate yield of 5.25 percent and the interest rate used in the present value analysis was 3.5 percent (the June 2003 Civil Service fund investment rate). According to FFB officials, the following approach was used to structure this $16.6 billion loan from Treasury: FFB borrowed $15.4 billion using principal repayments that mirrored principal payments used in the original FFB 9(a) obligation to the Civil Service fund, which, in turn, mirrored the underlying FFB loans made to its borrowers. For example, if FFB held a loan that called for a $10 million principal payment on December 31, 2005, then FFB would have borrowed $10 million from Treasury with a December 31, 2005, repayment date. In effect, after these transactions, FFB’s loan repayments for its 9(b) obligations to Treasury mirrored the underlying loan principal repayments that FFB expected to receive from its loan portfolio. FFB borrowed about $1.1 billion using a short-term obligation. The $1.1 billion corresponds to FFB’s net loss of $633 million and the Treasury general fund’s loss of $520 million, which were realized on the March 5, 2003, and June 30, 2003, transactions. According to FFB’s 2003 financial statements, FFB expects to recover its loss in future years. FFB repaid the short-term $1.1 billion 9(b) obligation on April 1, 2004, since FFB had adequate cash flows from its loans to make these payments. 
According to FFB officials, these increased cash flows resulted from (1) the reduced interest costs associated with the October 18, 2002, early redemption of FFB 9(a) obligations issued to the Civil Service fund noted earlier in this report and (2) the reduced interest costs associated with the June 30, 2003, FFB 9(b) obligations that were used to redeem the FFB 9(a) obligation issued to the Civil Service fund on March 5, 2003. Therefore, once the short-term 9(b) obligation is redeemed, the future principal payments associated with FFB's loan portfolio will, for all practical purposes, mirror the principal payments that will be made to Treasury. However, the interest earnings on FFB's loan portfolio will be far greater than the interest payments that will be due on the FFB 9(b) obligations issued to Treasury. This interest rate differential will then translate into increased earnings for FFB that can be expected to offset the losses associated with the 2003 exchange transactions with the Civil Service fund.

During the 1985 debt ceiling crisis, Treasury for the first time invested excess receipts of the Civil Service fund in FFB 9(a) obligations. Treasury has also exchanged Treasury obligations held by the Civil Service fund for obligations held or issued by FFB when Treasury experienced debt ceiling difficulties during the 1995/1996 debt ceiling crisis and the 2003 debt issuance suspension period. These exchanges and their effects on the Civil Service fund are discussed below.

During the 1985 debt ceiling crisis, Treasury for the first time exchanged about $15 billion of Treasury obligations held by the Civil Service fund for obligations issued by FFB. The purpose of this transaction was to make $15 billion of additional borrowing authority available under the statutory debt ceiling. At the time the transaction was made, these FFB obligations were mirror images of the Treasury par value specials held by the Civil Service fund. As such, as long as the FFB obligations were held to maturity or redeemed in accordance with the normal redemption policies of the Civil Service fund, this transaction would result in no adverse financial consequences for the Civil Service fund. As noted earlier in this report, it was not until October 2002 that the Civil Service fund portfolio was affected by this transaction.

During the 1995/1996 debt ceiling crisis, Treasury exchanged about $8.6 billion of Treasury obligations held by the Civil Service fund for federal agency obligations held by FFB. The purpose of this transaction was to make an additional $8.6 billion of borrowing authority available under the statutory debt ceiling. Since the terms and conditions of the federal agency obligations held by FFB differed from those of the obligations held by the Civil Service fund, the task of determining a fair exchange price was more complicated than in 1985. Because the effects of these differences in terms and conditions can be significant, a generally accepted methodology was used that considered such factors as (1) the current market rates for outstanding Treasury obligations at the time of the exchange, (2) the probability of changing interest rates, (3) the probability of the federal agency paying off the debt early, and (4) the premium the market would provide to an obligation that could be redeemed at par regardless of market interest rates. Treasury then obtained the opinion of an independent third party to determine whether its valuations were accurate.
In 1997, portions of the obligations received in this transaction were repaid early. Since the original analysis included a factor for the risk associated with the federal agency redeeming its obligations early, the Civil Service fund did not suffer any adverse consequences.

During the 2003 debt issuance suspension period, Treasury once again exchanged Treasury obligations held by the Civil Service fund for a $15 billion FFB 9(a) obligation. However, unlike the 1985 exchange, the terms and conditions associated with the FFB 9(a) obligation were not identical to the terms and conditions of the Treasury obligations held by the Civil Service fund. Therefore, to ensure that the transaction was fair to both parties, Treasury performed a present value analysis of the cash flows associated with the FFB obligation and the cash flows associated with the Treasury obligations held by the Civil Service fund. Furthermore, it was agreed that if FFB redeemed this obligation before maturity, the price paid would be based on current market rates. An agreement among FFB, Treasury, and Treasury acting on behalf of the Civil Service fund allowed the Secretary of the Treasury, on behalf of the fund, to redeem the FFB 9(a) obligation at par. As noted earlier, in June 2003 FFB redeemed this obligation and the Civil Service fund had a $139.5 million gain.

We have previously reported on aspects of Treasury's actions during the 2002 debt issuance suspension periods and earlier debt ceiling crises in the following reports:

Debt Ceiling: Analysis of Actions During the 2002 Debt Issuance Suspension Periods. GAO-03-134. Washington, D.C.: December 13, 2002.
Debt Ceiling: Analysis of Actions During the 1995/1996 Crisis. GAO/AIMD-96-130. Washington, D.C.: August 30, 1996.
Information on Debt Ceiling Limitations and Increases. GAO/AIMD-96-49R. Washington, D.C.: February 23, 1996.
Debt Ceiling Limitations and Treasury Actions. GAO/AIMD-96-38R. Washington, D.C.: January 26, 1996.
Social Security Trust Funds. GAO/AIMD-96-30R. Washington, D.C.: December 12, 1995.
Debt Ceiling Options. GAO/AIMD-96-20R. Washington, D.C.: December 7, 1995.
Civil Service Fund: Improved Controls Needed Over Investments. GAO/AFMD-87-17. Washington, D.C.: May 7, 1987.
Treasury's Management of Social Security Trust Funds During the Debt Ceiling Crises. GAO/HRD-86-45. Washington, D.C.: December 5, 1985.
A New Approach to the Public Debt Legislation Should Be Considered. FGMSD-79-58. Washington, D.C.: September 7, 1979.
GAO is required to review the steps taken by the Department of the Treasury (Treasury) to avoid exceeding the debt ceiling during the 2003 debt issuance suspension period. GAO was also asked to determine whether all major accounts that were used for debt ceiling relief have been properly credited or reimbursed. Accordingly, GAO determined whether Treasury followed its normal investment and redemption policies and procedures for the major federal government accounts with investment authority, analyzed the financial aspects of actions Treasury took during this period, and analyzed the impact of policies and procedures Treasury used to manage the debt during the period.

On February 20, 2003, Treasury determined that a debt issuance suspension period was in effect. A debt issuance suspension period is any period for which the Secretary of the Treasury has determined that obligations of the United States may not be issued without exceeding the debt ceiling. During this period, which lasted until May 27, 2003, the Secretary took actions related to the Government Securities Investment Fund of the Federal Employees' Retirement System (the G-Fund), the Civil Service Retirement and Disability Fund (the Civil Service fund), and the Exchange Stabilization Fund (ESF) to avoid exceeding the debt ceiling. Also, during fiscal year 2003, the Secretary initiated several actions involving the Civil Service fund, FFB, and the Treasury general fund that related to Treasury's efforts to manage the amount of debt subject to the debt ceiling. The Secretary took other actions to avoid exceeding the debt ceiling, such as suspending the sales of State and Local Government Series Treasury obligations and recalling noninterest-bearing deposits held by commercial banks as compensation for banking services provided to Treasury.

The actions taken, which were consistent with legal authorities provided to the Secretary and related to the G-Fund, the Civil Service fund, and ESF, initially resulted in interest losses to the G-Fund and ESF and principal and interest losses to the Civil Service fund. When the debt ceiling was increased to about $7.4 trillion on May 27, 2003, the Secretary fully reinvested the G-Fund and, on May 28, 2003, fully restored its interest losses, as required by law. On June 30, 2003, the Secretary fully compensated the Civil Service fund for principal and interest losses, as required by law. The losses related to ESF could not be restored without special legislation. As a result, related ESF losses of $3.6 million were not restored.

The actions initiated by Treasury in fiscal year 2003 that involved the early redemption of FFB debt obligations held by the Civil Service fund and exchanges of obligations among the Civil Service fund, FFB, and the Treasury general fund resulted in all three parties realizing gains or incurring losses. In some cases, GAO has been able to quantify the gains or losses that occurred as a result of these transactions. For example, according to FFB estimates, the Civil Service fund lost more than $1 billion in interest because of FFB's redemption of FFB obligations held by the Civil Service fund before their maturity date and unforeseen interest rate changes. In other cases, however, information needed to understand the potential consequences of these actions will not be available for a number of years.
The Secretary currently lacks the statutory authority to restore such losses and has not developed documented policies and procedures that can be used to minimize such losses in future Treasury actions involving FFB and an account with investment authority, such as the Civil Service fund.
Shortly after the September 11, 2001, terrorist attacks, Congress passed, and the President signed into law, the Aviation and Transportation Security Act, which established TSA and gave the agency responsibility for securing all modes of transportation, including the nation’s civil aviation system, which includes domestic and international commercial aviation operations. In accordance with 49 U.S.C. § 44907, TSA assesses the effectiveness of security measures at foreign airports served by a U.S. air carrier, from which a foreign air carrier serves the United States, that pose a high risk of introducing danger to international air travel, and at other airports deemed appropriate by the Secretary of Homeland Security. This provision of law also identifies measures that the Secretary must take in the event that he or she determines that an airport is not maintaining and carrying out effective security measures based on TSA assessments. TSA also conducts inspections of U.S. air carriers and foreign air carriers servicing the United States from foreign airports pursuant to its authority to ensure that air carriers certificated or permitted to operate to, from, or within the United States meet applicable security requirements, including those set forth in an air carrier’s TSA-approved security program. The Secretary of DHS delegated to the Assistant Secretary of TSA the responsibility for conducting foreign airport assessments but retained responsibility for making the determination that a foreign airport does not maintain and carry out effective security measures. Currently, TSA’s Security Operations and Transportation Sector Network Management divisions are jointly responsible for conducting foreign airport assessments and air carrier inspections. Table 1 highlights the roles and responsibilities of certain TSA positions within these divisions that are responsible for implementing the foreign airport assessment and air carrier inspection programs. TSA conducts foreign airport assessments to determine the extent to which foreign airports maintain and carry out effective security measures in order to ensure the security of flights bound for the United States. Specifically, TSA assesses foreign airports using 86 of the 106 aviation security standards and recommended practices adopted by ICAO, a United Nations organization representing nearly 190 countries. (See app. II for a description of the 86 ICAO standards and recommended practices TSA uses to assess security measures at foreign airports.) While TSA is authorized under U.S. law to conduct foreign airport assessments at intervals it considers necessary, TSA may not perform an assessment of security measures at a foreign airport without permission from the host government. During fiscal year 2005, TSA scheduled assessments by categorizing airports into two groups. Category A airports—airports that did not exhibit operational issues in the last two TSA assessments—were assessed once every 3 years, while category B airports—airports that did exhibit operational issues in either of the last two TSA assessments, or were not previously assessed—were assessed annually. Based on documentation provided by TSA, during fiscal year 2005, TSA assessed aviation security measures in place at 128 foreign airports that participated voluntarily in TSA’s Foreign Airport Assessment Program. TSA’s assessments of foreign airports are conducted by a team of inspectors, which generally includes one team leader and one team member. 
According to TSA, it generally takes 3 to 7 days to complete a foreign airport assessment. However, the amount of time required to conduct an assessment varies based on several factors, including the size of the airport, the number of air carrier station inspections to be conducted at the airport, the threat level to civil aviation in the host country, and the amount of time it takes inspectors to travel from the international field office (IFO) to the airport where the assessment will take place. An additional 2 weeks is required for inspectors to complete the assessment report after they return to the IFO.

Figure 1 shows the process for conducting a foreign airport assessment. Before TSA can assess the security measures at a foreign airport, the Transportation Security Administration Representative (TSAR) must first obtain approval from the host government to allow TSA to conduct an airport assessment and to schedule the date for an on-site visit to the foreign airport. During the assessment, the team of inspectors uses several methods to determine a foreign airport's level of compliance with international security standards, including conducting interviews with airport officials, examining documents pertaining to the airport's security measures, and conducting a physical inspection of the airport. For example, the inspectors are to examine the integrity of fences, lighting, and locks by walking the grounds of the airport. Inspectors also make observations regarding access control procedures, such as looking at employee and vehicle identification methods in secure areas, as well as monitoring passenger and baggage screening procedures in the airport. At the close of an airport assessment, inspectors brief foreign airport and government officials on the results of the assessment. TSA inspectors also prepare a report summarizing their findings on the airport's overall security posture and security measures, which may contain recommendations for corrective action and must be reviewed by the TSAR, the IFO manager, and TSA headquarters officials.

If the inspectors report that an airport's security measures do not meet minimum international security standards, particularly critical standards, such as those related to passenger and checked baggage screening and access controls, TSA headquarters officials are to inform the Secretary of Homeland Security. If the Secretary, based on TSA's airport assessment results, determines that a foreign airport does not maintain and carry out effective security measures, he or she must, after advising the Secretary of State, take secretarial action. Figure 2 describes in detail the types of secretarial action the Secretary may take in such instances. The basic types of secretarial action are as follows:

90-day action—The Secretary notifies foreign government officials that they have 90 days to address security deficiencies that were identified during the airport assessment and recommends steps necessary to bring the security measures at the airport up to ICAO standards.

Public notification—If, after 90 days, the Secretary finds that the government has not brought security measures at the airport up to ICAO standards, the Secretary notifies the general public that the airport does not maintain and carry out effective security measures.
Modification to air carrier operations—If, after 90 days, the Secretary finds that the government has not brought security measures at the airport up to ICAO standards: The Secretary may withhold, revoke, or prescribe conditions on the operating authority of U.S.-based and foreign air carriers operating at that airport, following consultation with appropriate host government officials and air carrier representatives, and with the approval of the Secretary of State. The President may prohibit a U.S.-based or foreign air carrier from providing transportation between the United States and any foreign airport that is the subject of a secretarial determination.

Suspension of service—The Secretary, with approval of the Secretary of State, shall suspend the right of any U.S.-based or foreign air carrier to provide service to or from an airport if the Secretary determines that a condition exists that threatens the safety or security of passengers, aircraft, or crew traveling to or from the airport, and the public interest requires an immediate suspension of transportation between the United States and that airport.

Along with conducting airport assessments, the same TSA inspection team also conducts air carrier inspections when visiting a foreign airport to ensure that air carriers are in compliance with TSA security requirements. Both U.S. and foreign air carriers with service to the United States are subject to inspection. As of February 2007, TSA guidance required TSA to inspect each U.S. air carrier station once a year, except at airports that TSA has determined to be “extraordinary” locations, where inspections are to occur twice a year. Foreign air carriers are to be inspected twice in a 3-year period at each foreign airport, except in extraordinary locations, where they are to be inspected annually. According to documentation provided by TSA, during fiscal year 2005, TSA conducted 529 inspections of foreign and U.S. air carriers serving the United States from foreign airports.

When conducting inspections, TSA inspectors examine compliance with applicable security requirements, including TSA-approved security programs, emergency amendments to the security programs, and security directives. Air carrier security programs are based on the Aircraft Operator Standard Security Program for U.S.-based air carriers and the Model Security Program for foreign air carriers, which serve as guidance for what an air carrier needs to include in its own security program. The Aircraft Operator Standard Security Program is designed to provide for the safety of passengers and their belongings traveling on flights against acts of criminal violence, air piracy, and the introduction of explosives, incendiaries, weapons, and other prohibited items onboard an aircraft. Likewise, the Model Security Program is designed to prevent prohibited items from being carried aboard aircraft, prohibit unauthorized access to airplanes, ensure that checked baggage is accepted only by an authorized carrier representative, and ensure the proper handling of cargo to be loaded onto passenger flights. When TSA determines that additional security measures are necessary to respond to a threat assessment or to a specific threat against civil aviation, TSA may issue a security directive or an emergency amendment to an air carrier security program that sets forth additional mandatory security requirements.
Air carriers are required to comply with each applicable security directive or emergency amendment issued by TSA, along with the requirements already within their security programs and any other requirements set forth in applicable law. Appendix III provides additional information on security requirements for U.S. and foreign air carriers serving the United States from foreign airports. Although U.S.-based and foreign air carriers are guided by different standards within the Aircraft Operator Standard Security Program and the Model Security Program, inspections for both of these entities are similar.

As in the case of airport assessments, air carrier inspections are conducted by a team of inspectors, which generally includes one team leader and one team member. An inspection of an air carrier typically takes 1 or 2 days, but can take longer depending on the extent of service by the air carrier. Inspection teams may spend several days at a foreign airport inspecting air carriers if there are multiple airlines serving the United States from that location. During an inspection, inspectors are to review applicable security manuals, procedures, and records; interview air carrier station personnel; and observe air carrier employees processing passengers from at least one flight from passenger check-in until the flight departs the gate to ensure that the air carrier is in compliance with applicable requirements. Inspectors evaluate a variety of security measures, such as passenger processing including the use of No-Fly and Selectee lists, checked baggage acceptance and control, aircraft security, and passenger screening. Inspectors record inspection results into TSA's Performance and Results Information System (PARIS), a database containing security compliance information on TSA-regulated entities. If an inspector finds that an air carrier is violating any applicable security requirements, additional steps are to be taken to record those specific violations and, in some cases, pursue them with further investigation. Figure 3 provides an overview of the air carrier inspection and documentation process, including the options for what type of penalty, if any, should be imposed on air carriers for identified security violations.

When an inspector identifies a violation of a security requirement, a record of the violation is opened in PARIS. According to guidance issued by TSA to inspectors, there are various enforcement tools available to address instances of noncompliance discovered during an inspection, as illustrated in the sketch following this list:

On-the-spot counseling is generally to be used for noncompliance that is minor and technical in nature, and can be remedied immediately at the time it is discovered. When this course of action is taken, the inspector notes that the noncompliance issue was closed with TSA counseling in the finding record and no further action is required.

Administrative action is generally to be used for violations or alleged violations that are unintentional, not the result of substantial disregard for security, where there are no aggravating factors present, or first-time violations. An administrative action results in either a letter of correction or a warning notice being issued to the air carrier.

Civil penalties in the form of fines are generally to be used in response to cases involving egregious violations, gross negligence, or where administrative action and counseling did not adequately resolve the noncompliance. Fines can range between $2,500 and $25,000 based on the severity of the violation.
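To make the decision sequence concrete, the sketch below encodes the choice among these enforcement tools as a simple function. It is an illustrative reading of the guidance summarized above, not TSA's actual criteria or data model; the field names and the ordering of the checks are assumptions.

```python
# Illustrative sketch of the enforcement-tool decision sequence described
# above. Field names and decision criteria are assumptions, for illustration only.

def recommend_enforcement(violation):
    """Return a recommended disposition for an inspection violation."""
    if violation["minor_and_correctable_on_spot"]:
        return "on-the-spot counseling (finding closed)"
    if violation["egregious"] or violation["gross_negligence"]:
        return "civil penalty (refer to Office of Chief Counsel)"
    if violation["first_time"] and not violation["aggravating_factors"]:
        return "administrative action (letter of correction or warning notice)"
    return "open enforcement investigation for further review"

example = {
    "minor_and_correctable_on_spot": False,
    "egregious": False,
    "gross_negligence": False,
    "first_time": True,
    "aggravating_factors": False,
}
print(recommend_enforcement(example))
# -> administrative action (letter of correction or warning notice)
```

As the surrounding text notes, counseling closes the finding on the spot, while administrative actions and civil penalties trigger the investigation and review steps described next.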
If the violation is severe enough, TSA may also recommend revocation of the air carrier's certification to fly into the United States, but this action has not yet been taken by TSA. If a violation is resolved with on-the-spot counseling, that fact is recorded in the finding record of PARIS and the matter is closed. However, if the inspector opts to pursue administrative action or a civil penalty against the air carrier, an enforcement investigation record is opened, and an investigation is conducted. Based on the investigation findings, the inspector recommends either an administrative action or a civil penalty, depending on the finding and the circumstances. If the investigation does not provide evidence that a violation occurred, the matter is closed with no action taken. If the inspector makes a recommendation for an administrative action, the supervisory inspector or IFO manager will typically review the recommendation and, if appropriate, approve and issue the action. The supervisory inspector may also recommend that the action be changed to no action or to a civil penalty. In the latter case, the matter is referred to the Office of Chief Counsel for further review. When the inspector recommends that a civil penalty be assessed on the air carrier, the case is referred to the Office of Chief Counsel for review. The office is responsible for ensuring that the action is legally sufficient, and that the recommended fine is consistent with agency guidelines. TSA's Office of Chief Counsel makes the final determination for any legal enforcement action. The office may approve the proposed action or make a recommendation for other actions, including administrative action or no action at all.

Based on the results of TSA's foreign airport assessments, during fiscal year 2005, some foreign airports and air carriers complied with all relevant aviation security standards, while others did not. The most common area of noncompliance for foreign airports was related to quality control—mechanisms to assess and address security vulnerabilities at airports. The Secretary of Homeland Security determined that the security deficiencies at two foreign airports assessed during fiscal year 2005 were so serious that he subsequently notified the general public that these airports did not meet international aviation security standards. In addition to assessing the security measures implemented by the airport authority at foreign airports, TSA also inspected the security measures put in place by air carriers at foreign airports. When security deficiencies identified during air carrier inspections could not be corrected or addressed immediately, TSA inspectors recommended enforcement action. TSA officials stated that while it is difficult to determine whether the assessment and inspection results are generally positive or negative, the cumulative foreign airport assessment and air carrier inspection results may be helpful in identifying the aviation security training needs of foreign aviation security officials. TSA does not have its own program through which aviation security training and technical assistance are formally provided to foreign aviation security officials. However, TSA officials stated that they could use the results of TSA's foreign airport assessments to refer foreign officials to training and technical assistance programs offered by ICAO and several other U.S. government agencies.
Of the 128 foreign airports TSA assessed during fiscal year 2005, TSA data show that at the completion of these assessments, 46 (about 36 percent) complied with all ICAO standards reviewed by TSA, while 82 (about 64 percent) did not meet at least one ICAO standard reviewed by TSA. For these 82 foreign airports, the average number of standards not met was about 5, and the number of standards not met by an individual airport ranged from 1 to 22.

Foreign airports most frequently failed to meet ICAO standards related to quality control. TSA data show that about 39 percent of foreign airports assessed during fiscal year 2005 did not comply with at least one ICAO quality control standard; these standards include mechanisms to assess and address security vulnerabilities at airports. For example, one airport did not meet an ICAO quality control standard because it did not have a mechanism in place to ensure that airport officials implementing security controls were appropriately trained and able to effectively perform their duties. In another instance, an airport did not comply with an ICAO quality control standard because, during its previous two assessments, inspectors found that the airport did not require or have records of background investigations conducted for individuals implementing security controls at the airport. Another area in which airports were not meeting ICAO quality control standards was the absence of a program to ensure the quality and effectiveness of their National Civil Aviation Security Program. TSA officials stated that quality control deficiencies may be prevalent among foreign airports in part because there is no international guidance available to aviation security officials to help them develop effective quality control measures. However, TSA officials stated that ICAO and other regional aviation security organizations offer training courses to help aviation security officials worldwide in developing effective quality control measures.

TSA data also show that, at the completion of the assessment, nearly half of the foreign airports assessed during fiscal year 2005 did not meet at least one of the 17 ICAO standards that TSA characterized as “critical” to aviation security. According to TSA, access control, screening of checked baggage, and screening of passengers and their carry-on items are critical aspects of aviation security because these measures are intended to prevent terrorists from carrying dangerous items, such as weapons and explosives, onto aircraft. TSA data show that some foreign airports assessed during fiscal year 2005 did not meet at least one access control standard. TSA data also show that some foreign airports did not meet ICAO standards related to checked baggage screening. One of the baggage screening deficiencies TSA identified involved foreign airports not taking steps to prevent checked baggage from being tampered with after the baggage had been screened, prior to the baggage being placed on the aircraft. TSA data also show that some foreign airports assessed during fiscal year 2005 did not meet ICAO standards related to passenger screening. One of the passenger-screening problems identified by TSA involved screening personnel not resolving hand-held metal detector or walk-through metal detector alarms to determine whether the individuals being screened were carrying prohibited items.
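Summary figures of this kind (the share of airports fully compliant, the average and range of standards not met, and the share not meeting a quality control standard) can be computed directly from per-airport assessment records. The sketch below shows one way to derive them; the record structure, airport names, and standard identifiers are hypothetical and are not drawn from TSA's data.

```python
# Hypothetical per-airport assessment records: airport -> list of ICAO
# standards (by identifier) found not met. Values are illustrative only.
assessments = {
    "Airport A": [],
    "Airport B": ["quality_control_1", "access_control_3"],
    "Airport C": ["quality_control_2"],
    "Airport D": ["screening_1", "screening_4", "quality_control_1"],
}

total = len(assessments)
noncompliant = {a: s for a, s in assessments.items() if s}

pct_compliant = 100 * (total - len(noncompliant)) / total
counts = [len(s) for s in noncompliant.values()]
avg_not_met = sum(counts) / len(counts)
qc_share = 100 * sum(
    1 for s in noncompliant.values()
    if any(std.startswith("quality_control") for std in s)
) / total

print(f"Fully compliant: {pct_compliant:.0f}% of {total} airports")
print(f"Average standards not met (noncompliant airports): {avg_not_met:.1f}")
print(f"Range of standards not met: {min(counts)} to {max(counts)}")
print(f"Share not meeting a quality control standard: {qc_share:.0f}%")
```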
Even if a foreign airport does not meet multiple aviation security standards, including critical standards, TSA may determine that such deficiencies do not warrant review by the Secretary of Homeland Security. However, if TSA determines that secretarial action may be warranted and the Secretary of Homeland Security, based on TSA’s assessment, determines that a foreign airport does not maintain and carry out effective security measures, he or she must take secretarial action. Since the inception of DHS in March 2003, the Secretary of Homeland Security has taken action against five foreign airports he determined were not maintaining and carrying out effective security measures, four of which received 90-day action letters. The Secretary notified the public of his determination with respect to two of these airports—Port-au-Prince Airport in Haiti and Bandara Ngurah Rai International Airport in Bali, Indonesia—both of which were assessed during fiscal year 2005. TSA officials told us that the decision to take secretarial action against an airport is not based solely on the number and type of security deficiencies identified during TSA airport assessments. Rather, the secretarial action decision is based on the severity of the security deficiencies identified, as well as past compliance history, threat information, and the capacity of the host government to take corrective action. For example, there were other foreign airports assessed during fiscal year 2005 that did not comply with about the same number and type of critical ICAO standards as the five airports that received secretarial action. However, according to the former Deputy Director of TSA’s Compliance Division, secretarial action was not taken against these airports either because the security deficiencies were determined to be not as severe, the host country officials were capable of taking immediate corrective action to address the deficiencies, or TSA did not perceive these airports to be in locations at high risk of terrorist activity. Table 2 demonstrates how two foreign airports—one for which secretarial action was taken and the other for which no secretarial action was taken—have about the same number and types of critical deficiencies, but differ in the severity of the deficiencies and their capability to take immediate corrective action to address identified deficiencies. According to TSA, secretarial actions are lifted when the Secretary, in part based on TSA’s assessment of the airport, determines that the airport is carrying out and maintaining effective security measures. TSA lifted the secretarial action at Port-au-Prince airport in Haiti in July 2006, 19 months after the public notification was issued. During this 19-month period, TSA assisted Haitian officials in developing a national civil aviation security plan and provided training on how to properly screen passengers and their carry-on baggage. According to the former Deputy Director of TSA’s Compliance Division, although TSA determined earlier during 2006 that all of the security deficiencies at the airport had been addressed by Haitian officials, based on specific intelligence information regarding threats to the airport in Haiti, the Secretary delayed lifting the secretarial action until July 2006. As of February 2007, the public notification for the airport in Bali was still in place. TSA officials stated that they are in frequent contact with Indonesian officials to discuss Indonesia’s progress in addressing security deficiencies at the airport. 
TSA officials also stated that they are awaiting Indonesian officials’ request for TSA to conduct an airport assessment to determine whether the security deficiencies at the airport in Bali have been addressed. In addition to assessing the security measures implemented by the airport authority at foreign airports, TSA also inspected the security measures put in place by air carriers at foreign airports. According to air carrier inspection data maintained by TSA, during fiscal year 2005, of the 529 inspections of air carriers operating out of foreign airports, there were 373 inspections (about 71 percent) for which the air carrier complied with all TSA security requirements, and 156 inspections (about 29 percent) for which the air carrier did not comply with at least one TSA security requirement. For these 156 inspections, the average number of TSA requirements not met was about 3, and the number of TSA requirements not met by an individual inspected air carrier ranged from 1 to 18. The total number of security requirements against which air carriers were inspected generally ranged from about 20 to 80, depending on the location of the foreign airport in which the air carrier operated, the extent of a carrier’s operation at the airport, and whether the carrier was a U.S.-based or foreign-based carrier. During fiscal year 2005 air carrier inspections, TSA identified security deficiencies in several areas, including aircraft security and passenger and checked baggage screening. Because TSA has authority to regulate air carriers that provide service to the United States from foreign airports, TSA inspected air carriers against specific security requirements established by TSA and included in the air carriers’ TSA-approved security programs. TSA officials told us that they view operational security requirements for air carriers as critical—as opposed to documentary requirements associated with the air carrier’s approved security program—because these requirements are designed to prevent terrorists from carrying weapons, explosives, or other dangerous items onto aircraft. When TSA inspectors identify deficiencies that cannot be corrected or addressed immediately, the inspectors are to recommend enforcement action. Based on data provided by TSA, TSA inspectors identified 419 violations (security deficiencies) as a result of the 156 air carrier inspections conducted during fiscal year 2005 where TSA identified at least one security deficiency. Data from TSA showed that 259 violations (about 62 percent) were corrected or addressed immediately. TSA inspectors submitted 76 violations (about 18 percent) for investigation because the violations were considered serious enough to warrant an enforcement action. TSA can impose three types of enforcement action on air carriers that violate security requirements—a warning letter, a letter of correction, or a monetary civil penalty. Based on information included in TSA’s investigation module within PARIS, for the 47 investigations we could link to fiscal year 2005 inspections, warning letters were issued in 26 cases, and letters of correction were issued in 14 cases. Fines ranging from $18,000 to $25,000 were recommended in the 7 cases where inspectors recommended civil penalties be imposed. Of those, fines ranging from $4,000 to $15,000 were ultimately levied in 3 cases, in 1 case a warning notice was issued instead of a civil penalty, and in 2 cases no action was taken. As of December 2006, 1 case remained unresolved. 
TSA officials stated that it is difficult to draw conclusions about the cumulative foreign airport assessment and air carrier inspection results— such as whether the results are generally positive or negative—because the primary concern is not whether security deficiencies are identified. Instead, TSA officials are more concerned about whether foreign countries have the capability and willingness to address security deficiencies. According to TSA, some foreign countries do not have the aviation security expertise or financial resources to adequately address security deficiencies. TSA officials also stated that some foreign countries do not regard aviation security as a high priority, and therefore do not intend to correct security deficiencies identified during TSA assessments. Further, TSA officials stated that foreign officials’ capability and willingness also influence the extent to which air carriers comply with security requirements. Although TSA has not conducted its own analysis of foreign airport assessment and air carrier inspection results, TSA officials stated that our analysis of the results was consistent with their assumptions regarding the most prominent security deficiencies identified at foreign airports and among air carriers. Additionally, TSA officials stated that the cumulative foreign airport assessment and air carrier inspection results may be helpful in identifying the aviation security training needs of foreign aviation security officials. TSA does not have an internally funded program in place that is specifically intended to provide aviation security training and technical assistance to foreign aviation security officials. However, TSA officials stated that they coordinate with other federal agencies, such as the Department of State and the U.S. Trade and Development Agency, to identify global and regional training needs and provide instructors for the aviation security training courses these federal agencies offer to foreign officials. (See app. IV for a description of the aviation security training and technical assistance programs offered by U.S. government agencies.) While TSA does not always determine which foreign countries would receive aviation security training and technical assistance offered by other federal agencies, TSA officials stated that they could use the cumulative results of TSA’s foreign airport assessments to refer foreign officials to these assistance programs. TSA used various methods to help foreign officials and air carrier representatives address security deficiencies identified during TSA assessments and inspections. However, opportunities remain for TSA to enhance oversight of its foreign airport assessment and air carrier inspection programs. To help foreign airport officials and host government officials address security deficiencies identified during foreign airport assessments, TSA inspectors provided on-site consultation to help address security deficiencies in the short term, made recommendations for addressing security deficiencies over the long term, and recommended aviation security training and technical assistance opportunities for foreign officials to help them meet ICAO standards. During fiscal year 2005, TSA resolved 259 of the 419 security deficiencies identified during TSA inspections through on-site consultation. Additionally, TSA assigned all U.S. 
air carriers and foreign air carriers to a principal security inspector and international principal security inspector, respectively, to provide counseling or clarification regarding TSA security requirements. Although TSA has assisted foreign airport officials and air carrier representatives in addressing security deficiencies, TSA did not track the status of scheduled airport assessments and air carrier inspections, document foreign governments’ progress in addressing security deficiencies at foreign airports, track enforcement actions taken in response to air carrier violations, and measure the impact of the foreign airport assessment and air carrier inspection programs on security. Such information would have provided TSA better assurance that the foreign airport assessment and air carrier inspection programs are operating as intended. TSA officials stated that while the primary mission of the foreign airport assessment program is to ensure the security of U.S.-bound flights by assessing whether foreign airports are complying with ICAO standards, a secondary mission of the program is to assist foreign officials in addressing security deficiencies that TSA identified during its foreign airport assessments. As part of the foreign airport assessment program, TSA officials assisted foreign authorities in addressing security deficiencies in various ways, including providing on-site consultation to help airport officials or the host government immediately address security deficiencies, making recommendations to airport officials or the host government for corrective action intended to help sustain security improvements, and helping to secure aviation security training and technical assistance for foreign governments. Based on our review of TSA foreign airport assessment reports, during fiscal year 2005, TSA provided on-site consultation to help foreign officials immediately address security deficiencies that were identified during airport assessments and made recommendations to help foreign officials sustain security improvements in the longer term. One type of security deficiency identified during TSA’s fiscal year 2005 foreign airport assessments involved a particular passenger checkpoint screening function. As a short-term solution to this security deficiency, on at least two occasions, TSA inspectors provided on-site training to instruct screeners on proper passenger screening techniques. As a longer-term solution, the assessment reports identify that in some cases, TSA inspectors recommended that the airport conduct remedial training for screeners and routinely test screeners who work at the passenger checkpoint to determine if they are screening passengers correctly. Another security deficiency identified at foreign airports during fiscal year 2005 related to the security of airport perimeters. After identifying this deficiency, inspectors consulted with foreign airport officials who, in a few cases, took immediate action to address the deficiency. According to the assessment reports, in some cases, TSA inspectors recommended measures that would help the airport sustain perimeter security in the longer term. In cases when a short-term solution may not be feasible, TSA inspectors may have only recommended longer-term corrective action. For example, in some cases, TSA inspectors recommended that foreign airport officials embark upon a longer-term construction project to address a particular type of security deficiency. 
During fiscal year 2005, TSA also assisted foreign governments in securing training and technical assistance provided by TSA and other U.S. government agencies to help improve security at foreign airports, particularly at airports in developing countries. For example, four of the seven TSA Representatives (TSARs) with whom we met said that they had assisted foreign governments in obtaining training either through the State Department's Anti-Terrorism Assistance Program or through the U.S. Trade and Development Agency's aviation security assistance programs. The goals of the Anti-Terrorism Assistance Program are to (1) build the capacity of foreign countries to fight terrorism; (2) establish security relationships between U.S. and foreign officials to strengthen cooperative anti-terrorism efforts; and (3) share modern, humane, and effective anti-terrorism techniques. The State Department addresses the capacity-building goal of the Anti-Terrorism Assistance Program by offering a selection of 25 training courses to foreign officials, 1 of which focuses on airport security. The State Department provided the airport security course, which is taught by TSA instructors, to seven foreign countries during fiscal year 2005—Bahamas, Barbados, Dominican Republic, Kazakhstan, Philippines, Qatar, and United Arab Emirates.

The U.S. Trade and Development Agency also provides aviation security training and technical assistance to help achieve its goal of facilitating economic growth and trade in developing countries. During fiscal year 2005, the U.S. Trade and Development Agency provided aviation security training for government officials in Haiti, Malaysia, and sub-Saharan Africa. During the same year, the agency held regional workshops for various countries worldwide on developing quality control programs. Government officials from two of the five countries we visited identified the importance of obtaining quality control training, particularly given that they have not yet established their own quality control function. Appendix IV includes a detailed description of aviation security training and technical assistance provided to foreign officials by the State Department and the U.S. Trade and Development Agency, as well as other U.S. government agencies.

Government and airport officials from five of the seven foreign countries we visited and officials from 5 of the 16 foreign embassies we visited stated that TSA's airport assessments and the resulting assistance provided by TSA have helped strengthen airport security in their countries. For example, officials from one country said that TSA assessments enabled them to identify and address security deficiencies. Specifically, officials stated that the government could not independently identify security deficiencies because it did not have its own airport assessment program—a condition that TSA officials told us exists in many countries. Airport officials in another country stated that TSA's airport assessments and on-site assistance led to immediate improvements in the way in which passengers were screened at their airport, particularly with regard to the pat-down search procedure. Embassy officials representing another country also stated that TSA's assessments reinforce the results of other assessments of their airports. In addition, these officials stated that they appreciated the good rapport and cooperative relationships they have with TSA inspection officials.
Airport officials in another country we visited stated that TSA assisted them in developing their aviation security management program, and that the results of TSA's assessments provided them with examples of where they need to concentrate more efforts on meeting ICAO standards. Government officials in this same country said that the TSAR has helped them to comply with ICAO standards related to the contents of a member state's national aviation security program. At the recommendation of the TSAR, these officials also planned to participate in an aviation security workshop provided by the Organization of American States, which they also felt would be beneficial in helping the government formulate its national aviation security programs and associated security regulations.

In addition to assisting foreign officials in addressing security deficiencies identified during airport assessments, TSA also assisted air carrier representatives in addressing security deficiencies that were identified during air carrier inspections. Of the 419 instances in which TSA inspectors identified noncompliance with TSA security requirements during fiscal year 2005, TSA data show 259 were resolved through counseling—that is, the security deficiencies were resolved as a result of on-site assistance or consultation provided by TSA. For example, during one inspection, TSA observed that the security contractor employed by the air carrier was not properly searching the aircraft cabin for suspicious, dangerous, or deadly items prior to boarding. TSA instructed the contractor to fully inspect those locations that were not searched properly, and obtained assurance that the air carrier would provide information to the contractors to ensure proper searches were conducted. In another instance, inspectors identified a security deficiency related to catering carts. The inspectors notified appropriate catering facility officials, who stated that the security deficiency was highly unusual and that it would not happen again. The inspectors also informed the air carrier of the finding and recommended that the carrier, during its internal audits, ensure that catering carts are properly secured.

In addition to counseling provided by inspectors when deficiencies are identified, TSA assigns each air carrier to either a principal security inspector (PSI), for U.S.-based air carriers, or an international principal security inspector (IPSI), for foreign air carriers with service to the United States, to assist air carriers in complying with TSA security requirements. Although PSIs and IPSIs do not participate in air carrier inspections, they do receive the inspection results for the air carriers that they work with. According to the three PSIs and four IPSIs with whom we met, PSIs and IPSIs provide counsel to the air carriers and provide clarification when necessary on TSA security requirements. For example, they provide air carriers with clarification on the requirements contained in security directives and emergency amendments issued by TSA. Several of the foreign air carriers we met with told us that the IPSIs are generally responsive to their requests.
In other instances, when an air carrier cannot comply with a TSA security requirement—such as when complying with a TSA security requirement would cause the air carrier to violate a host government security requirement—the air carrier will work with the IPSI or PSI to develop alternative security procedures that are intended to provide a level of security equivalent to the level of security provided by TSA's requirements, according to the PSIs and IPSIs with whom we met. These alternative procedures are reviewed by the PSI or IPSI and then approved by TSA headquarters officials.

TSA has several controls in place to ensure that the agency is implementing the foreign airport assessment and air carrier inspection programs as intended. However, there are opportunities for TSA to improve its oversight of these programs to help ensure that the status and disposition of scheduled foreign airport assessments and air carrier inspections are documented and to assess the impact of the assessment and inspection programs.

Regarding the foreign airport assessment program, TSA required inspectors and TSARs to follow standard operating procedures when scheduling and conducting foreign airport assessments. These procedures outline the process for coordinating with host government officials to schedule assessments, conduct foreign airport assessments, and report the results of the assessments. TSA also provided inspectors with a job aid to help them ensure that all relevant ICAO standards are addressed during an assessment. The job aid prompts inspectors as to what specific information they should obtain to help determine whether the foreign airport is meeting ICAO standards. For example, in assessing measures related to passenger-screening checkpoints, the job aid prompts the inspector to describe the means by which the airport ensures there is no mixing or contact between screened and unscreened passengers. In addition to the standard operating procedures and the job aid, TSA requires inspectors to use a standard format for reporting the results of foreign airport assessments and has implemented a multilayered review process to help ensure that airport assessment reports are complete and accurate.

With regard to the air carrier inspection program, TSA uses the automated Performance and Results Information System to compile inspection results. PARIS contains results of air carrier inspections conducted by TSA at airports in the United States as well as inspections conducted at foreign airports. For air carrier inspections conducted at foreign airports, a series of prompts guides inspectors regarding what security standards U.S. carriers and foreign carriers operating overseas must meet. PARIS also includes a review process whereby completed inspection results can be reviewed by a supervisory inspector before being approved for release into the database.

While TSA has controls such as these in place for the foreign airport assessment and air carrier inspection programs to ensure consistent implementation and documentation, we identified four additional controls that would strengthen TSA's oversight of the foreign airport assessment and air carrier inspection programs: tracking the status of scheduled airport assessments and air carrier inspections, documenting foreign governments' progress in addressing security deficiencies, tracking air carrier violations, and measuring the impact of the foreign airport assessment and air carrier inspection programs.
TSA has established some controls for tracking the status of scheduled airport assessments and air carrier inspections, but additional controls are needed. TSA provided us with a list of foreign airport assessments that were scheduled to take place during fiscal year 2005 and identified which of the assessments were actually conducted and which assessments were deferred or canceled. We compared the list of scheduled assessments provided by TSA to the fiscal year 2005 airport assessment reports we reviewed and identified several discrepancies. Specifically, there were 10 airport assessments that TSA identified as having been conducted, but when we asked TSA officials to provide the reports for these assessments, they could not, and later categorized these assessments as deferred or canceled. Conversely, there was 1 airport assessment that TSA identified as having been deferred, but according to the assessment reports we reviewed, this assessment was actually conducted during fiscal year 2005. There were also five foreign airports for which TSA provided us with the fiscal year 2005 assessment report, but that were not included on TSA's list of assessments scheduled for fiscal year 2005. Further, there were three foreign airports listed under one IFO as having been deferred, whereas these same airports were listed under another IFO as having been canceled during fiscal year 2005.

TSA also did not maintain accurate information on the status of air carrier inspections scheduled for fiscal year 2005. TSA provided us with a list of all air carrier inspections conducted during fiscal year 2005. We compared the list to the results contained in the PARIS database and found numerous inconsistencies. Specifically, we identified 46 air carrier inspections at 18 airports that were not included on TSA's list, but were included in PARIS as having been conducted during fiscal year 2005.

Federal standards for internal controls and associated guidance suggest that agencies should document key decisions in a way that is complete and accurate, and that allows decisions to be traced from initiation, through processing, to completion. TSA officials acknowledged that they have not always maintained accurate and complete data on the status of scheduled foreign airport assessments and air carrier inspections, in part due to the lack of a central repository in which to maintain assessment information and the lack of standardization in the way in which each IFO manager maintains assessment information. Additionally, IFOs had not always documented the reasons why assessments and inspections were deferred or canceled. TSA officials stated that in August 2006 they began standardizing and refining the existing databases used by IFO staff for tracking the status of foreign airport assessments and air carrier inspections by including data elements such as the dates of previous and planned assessments. TSA officials also stated that IFO staff are now encouraged to identify the reasons why assessments and inspections were deferred or canceled in the comment section of the database. While TSA has made some improvements to the way in which it tracks the status of scheduled foreign airport assessments and air carrier inspections, there are opportunities for additional refinements to TSA's tracking system.
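Discrepancies of this kind can be surfaced with a simple reconciliation between the schedule list and the records of completed work. The sketch below illustrates the basic set comparison; the airport names, statuses, and data sources are hypothetical and do not represent TSA's tracking files or PARIS.

```python
# Illustrative reconciliation of a schedule list against completed-report
# records. Airport names and statuses are hypothetical.

scheduled = {
    "Airport A": "conducted",
    "Airport B": "conducted",
    "Airport C": "deferred",
    "Airport D": "canceled",
}
# Airports for which a fiscal-year assessment report actually exists.
reports_on_file = {"Airport A", "Airport C", "Airport E"}

marked_conducted = {a for a, s in scheduled.items() if s == "conducted"}
deferred_or_canceled = {a for a, s in scheduled.items()
                        if s in ("deferred", "canceled")}

missing_reports = marked_conducted - reports_on_file   # conducted, but no report
unlisted_reports = reports_on_file - set(scheduled)    # report, but never scheduled
status_conflicts = reports_on_file & deferred_or_canceled

print("Marked conducted, no report found:", sorted(missing_reports))
print("Report on file, not on schedule:  ", sorted(unlisted_reports))
print("Report on file, listed as deferred/canceled:", sorted(status_conflicts))
```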
For example, according to our review of TSA's fiscal year 2007 foreign airport assessment and air carrier inspection schedules, TSA did not provide an explanation for why 13 of 34 foreign airport visits—that is, either assessments or inspections—had not been conducted according to schedule. TSA officials acknowledged that their assessment and inspection tracking system is a work in progress and that they need to make additional decisions regarding the tracking system, such as which data elements to include. Without adequate controls in place for tracking which scheduled assessments and inspections were actually conducted and which were deferred or canceled, it may be difficult for TSA to ensure that all scheduled airport assessments and air carrier inspections are actually conducted.

TSARs—the primary liaisons between the U.S. government and foreign governments on transportation security issues—are responsible for following up on progress made by foreign officials in addressing security deficiencies identified during TSA assessments. Although the TSARs we interviewed stated that they conducted such follow-up, the TSARs did not consistently document the progress foreign governments had made in addressing airport security deficiencies. We found 199 instances in the 128 fiscal year 2005 foreign airport assessment reports we reviewed in which the report stated that the TSAR would follow up, or recommended that the TSAR follow up, on the progress made by foreign officials in addressing security deficiencies identified during airport assessments. However, TSA may not be able to determine whether TSARs had actually followed up on these security deficiencies because TSARs did not consistently document their follow-up activities.

We interviewed 8 of the 20 TSARs stationed at embassies throughout the world and one Senior Advisor and DHS attaché. Six of those TSARs stated that they followed up on outstanding security deficiencies in various ways, depending on the severity of the deficiency and the confidence that the TSAR had in the host government's ability to correct the deficiency. For example, one TSAR told us that for less critical security deficiencies, she may inquire about the foreign government's status in addressing the deficiency via electronic mail or telephone call. On the other hand, for a critical deficiency, the TSAR said she may follow up in person on the host government's progress in addressing the deficiency. However, another TSAR stated that she only follows up on the foreign government's progress in addressing national program issues. She stated that she does not follow up on operational security deficiencies—such as screening of passengers and checked baggage—because she believes this is the responsibility of the TSA inspection staff. While 4 of the 8 TSARs we interviewed told us that they were able to follow up on the status of most or all security deficiencies within their area of responsibility, not all of these TSARs reported the results of their follow-up to TSA inspection staff, in part because they were not required to do so. In addition, TSARs stated that when they did document the results of their follow-up, it was not done consistently. For example, follow-up results were sometimes documented in weekly trip reports (generally electronic mail messages) TSARs send to their immediate supervisor in TSA headquarters or in action plans. In addition, these weekly reports did not always contain information from the TSARs' follow-up activities with host government or airport officials.
Federal standards for internal controls and associated guidance suggest that agencies should document key activities in such a way that maintains the relevance, value, and usefulness of these activities to management in controlling operations and making decisions. TSA headquarters officials acknowledged that it is important to consistently document foreign governments' status in addressing security deficiencies identified during TSA assessments, because this information could be helpful to TSA inspection staff when determining where to focus their attention during future assessments. Additionally, documenting foreign governments' progress toward addressing deficiencies would enable TSA to have current information on the security status of foreign airports that service the United States. TSA established a working group in September 2006 to explore how the results of TSAR follow-up should be documented and used by TSA inspection staff. Because of the logistical challenges of coordination among working group members who are located around the world, TSA has not set a time frame for when the working group is expected to complete its efforts.

TSA does not maintain air carrier inspection data in a way that would enable the agency to determine what enforcement actions were taken in response to identified security violations and thus cannot readily determine whether appropriate penalties, if any, were imposed on air carriers that violated security requirements. We found two factors that contributed to this situation. First, information on violations and findings was not consistently recorded, and second, TSA does not link enforcement actions to inspection findings. For example, when an inspector identifies a violation during an inspection, that information is recorded in the inspections database in PARIS and a record is to be opened in the findings database. The findings database record includes information related to the violation, including whether the violation was closed with counseling or an investigation was opened. However, we found that information is not maintained in a way that enables TSA to readily determine the enforcement action that was taken in response to a particular violation. For example, the findings database did not include information on the action taken by TSA inspectors for all security violations that were identified in the inspections database. Specifically, the inspections database indicated that during fiscal year 2005, 419 air carrier violations were identified during 156 inspections. However, the findings database only identified the actions taken by TSA inspectors for 335 violations. On further analysis, we found that of the 156 inspections where violations were identified, the number of violations for 79 (51 percent) of those inspections was not properly recorded in the findings database. We determined that for 66 inspections, the number of violations identified in the findings database was less than the number of violations identified in the inspections database. Therefore, there is no record of what action was taken, if any, by TSA inspectors to address the additional violations identified during these inspections. We also determined that for 13 inspections, the number of violations identified in the findings database was greater than the number of violations identified in the inspections database.
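A mismatch of this kind can be detected by comparing per-inspection violation counts across the two PARIS modules. The sketch below outlines that comparison; the record layout, inspection identifiers, and counts are assumptions for illustration and do not reflect PARIS's actual schema or contents.

```python
# Illustrative comparison of violation counts recorded per inspection in two
# databases. Inspection IDs and counts are hypothetical, not PARIS data.

violations_in_inspections_db = {"INSP-001": 3, "INSP-002": 1, "INSP-003": 4}
violations_in_findings_db    = {"INSP-001": 2, "INSP-002": 1, "INSP-003": 6}

undercounted, overcounted = [], []
for insp_id, reported in violations_in_inspections_db.items():
    recorded = violations_in_findings_db.get(insp_id, 0)
    if recorded < reported:
        undercounted.append((insp_id, reported - recorded))
    elif recorded > reported:
        overcounted.append((insp_id, recorded - reported))

# Undercounted inspections have violations with no recorded disposition;
# overcounted inspections have findings with no matching inspection violation.
print("Findings database missing dispositions:", undercounted)
print("Findings database with extra records:  ", overcounted)
```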
Another reason TSA could not readily identify what enforcement actions were taken in response to specific security violations was that TSA often issued one enforcement action for multiple security violations, and inspectors were not required to identify each individual violation that was addressed by a particular enforcement action. Without being able to readily identify what enforcement action was taken in response to specific security violations, TSA has limited assurance that the inspected air carriers received appropriate penalties, if deemed necessary, and that identified security violations were resolved. TSA officials told us that they are currently developing updates to PARIS that will automatically open a finding each time a violation is recorded in the inspections database, which will require a link between each violation and the planned course of action to resolve it. However, TSA has not established a time frame for when these updates will be implemented.

TSA is taking steps to assess whether the goals of the foreign airport assessment and air carrier inspection programs are being met, but identified several concerns about doing so. As previously discussed, the goal of the foreign airport assessment and air carrier inspection programs is to ensure the security of U.S.-bound flights by evaluating the extent to which foreign governments and air carriers are complying with applicable security requirements. The Government Performance and Results Act of 1993 requires executive branch departments to use performance measures to assess progress toward meeting program goals and to help decision makers assess program accomplishments and improve program performance. Performance measures can be categorized as outcome measures, which describe the intended result of carrying out a program or activity; output measures, which describe the level of activity that will be provided over a period of time; or efficiency measures, which show the relationship between the outcome or output of a program and the resources—inputs—used to implement program activities. TSA developed the following output and efficiency measures to evaluate its international aviation regulatory and enforcement efforts, which include foreign airport assessments and air carrier inspections: (1) the percentage of countries with last-point-of-departure service to the United States that are provided aviation security assistance at the national or airport level, (2) the percentage of countries that do not have last-point-of-departure service to the United States that are provided aviation security assistance at the national or airport level, and (3) the average number of international inspections conducted annually per inspector. While output measures are useful in determining the number of foreign countries for which TSA has provided aviation security assistance and the rate at which such assistance is being provided, outcome-based measures would be particularly useful because they could be used to determine the extent to which TSA has helped to improve security at foreign airports that service the United States. However, TSA officials identified several challenges in developing outcome measures, particularly measures for the foreign airport assessment program. TSA officials said that it is difficult to develop meaningful outcome measures because TSA does not have control over whether foreign authorities implement and meet ICAO standards.
Additionally, TSA officials stated that if the agency develops outcome measures for the foreign airport assessment program, it would suggest that TSA has control over whether foreign airports meet ICAO standards, which these officials believe may give the appearance that TSA does not respect the sovereignty of the countries it assesses. TSA officials further stated that if foreign officials perceive that TSA has no regard for their country’s sovereignty, foreign officials may prohibit TSA from conducting assessments in their countries. We recognize that whether or not foreign governments meet ICAO standards is not within TSA’s control and that foreign officials’ concerns about sovereignty are important. However, TSA officials have acknowledged that the assistance the agency provides and, in rare cases, secretarial actions contribute to whether foreign governments meet ICAO standards. Also, there is precedent within the federal government for developing outcome-oriented performance measures to evaluate efforts that are not within an agency’s control but can be influenced by the agency. For example, the State Department developed performance measures and targets for its Anti-Terrorism Assistance Program to evaluate the agency’s impact on helping foreign countries improve their anti-terrorism capabilities. Specifically, during fiscal year 2006, the State Department set a performance target that two of the six countries that received assistance through the Anti-Terrorism Assistance Program would achieve a capability to effectively deter, detect, and counter terrorist organizations and threats and sustain those capabilities. Another performance target for the program that is beyond the State Department’s control is for all 191 United Nations member states to implement a particular United Nations resolution that requires all states to take sweeping measures to combat terrorism. TSA headquarters officials, including the Director of Compliance and Area Directors, who oversee implementation of the foreign airport assessment program, questioned whether it would be appropriate to measure improvements made by foreign countries as a result of the assessment program. They stated that the primary purpose of the foreign airport assessment program is not to help foreign officials improve security at their airports; rather, the primary purpose of the foreign airport assessment program is to identify—not correct—security deficiencies at foreign airports and inform the Secretary of Homeland Security of such deficiencies. These officials also stated that the agency’s efforts to assist foreign officials in addressing security deficiencies are voluntary and, therefore, do not warrant performance measurement. Although TSA may not be required to assist foreign officials in addressing security deficiencies identified during foreign airport assessments, TSA is in fact using its inspector and TSAR resources to this end. Consistent with the Government Performance and Results Act of 1993, developing performance measures and associated targets, such as the percentage of security deficiencies that were addressed as a result of TSA on-site assistance and TSA recommendations for corrective action, would enable TSA to evaluate the impact of its assistance on improving security at foreign airports and be held more accountable for the way in which it uses its resources. TSA could also evaluate the impact that secretarial actions have on helping foreign airports address security deficiencies in order to meet ICAO standards. 
Another challenge faced by TSA officials in developing outcome-based measures for the foreign airport assessment program is the lack of an automated system to collect and compile assessment results. TSA officials stated that in the absence of an automated system to input data and information obtained from airport assessments, they do not have enough resources to manually compile and analyze airport assessment data that could be used to feed into outcome measures. Currently, TSA headquarters maintains airport assessment reports either electronically or in hard copy, which makes it difficult to conduct systematic analysis of assessment results across foreign airports and over time to evaluate the impact TSA’s airport assessment program has had on helping foreign countries meet ICAO standards. TSA officials told us that $1 million was budgeted to develop a secured, automated database—the Foreign Airport Assessment Reporting System—to track airport assessment results. However, TSA officials stated that the development of the Foreign Airport Assessment Reporting System has been slow due to challenges TSA has experienced in linking the existing electronic systems in which previous airport assessment reports are stored with the new database. However, upon completion of the Foreign Airport Assessment Reporting System, which is scheduled for fiscal year 2008, TSA expects that the database will enhance standardization of assessment reports as well as accessibility to the results of previous foreign airport assessments. TSA also expects that the Foreign Airport Assessment Reporting System will enable TSA to conduct analysis of foreign airport assessment results. As with the foreign airport assessment program, TSA has also not developed outcome-based performance measures for its overseas air carrier inspection program. However, TSA officials have begun to collect and analyze data on air carrier inspections that could be used to measure the impact of TSA’s inspection program on helping air carriers comply with TSA security requirements. During fiscal year 2006, TSA officials who manage PARIS began analyzing air carrier inspection results in an effort to assist the agency in evaluating the impact that enforcement actions— including on-site counseling, administrative actions, and civil penalties— have had on ensuring air carrier compliance with TSA security requirements. These officials plan to assess whether there is a relationship between the severity of civil penalties and the reoccurrence of security violations. The analysis that is being conducted by these officials is consistent with our reviews of agency compliance inspection programs, which have cited the need for evaluations of enforcement activities and the effectiveness of using sanctions such as civil penalties to increase compliance. However, while the TSA officials managing PARIS are conducting such analysis of performance information, officials who manage the air carrier inspection program did not intend to use the results of this analysis to develop performance measures or to influence program decisions. According to TSA officials, considering that overall compliance rates are very high among air carriers, and the number of enforcement actions taken by TSA is relatively low, there may not be enough data to conduct meaningful analysis of the impact of enforcement actions. 
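The analysis that the PARIS officials describe—relating the severity of enforcement actions to the reoccurrence of violations—could be structured along the following lines. This sketch is purely illustrative; the carriers, enforcement actions, and outcomes shown are hypothetical, and the report does not describe how TSA actually organizes these data.

```python
# Hypothetical sketch of relating enforcement-action severity to repeat
# violations; TSA's actual analysis and data fields are not described here.
from collections import defaultdict

# (air carrier, enforcement action, whether a violation recurred later)
history = [
    ("Carrier A", "counseling", True),
    ("Carrier B", "counseling", True),
    ("Carrier C", "administrative action", False),
    ("Carrier D", "civil penalty", False),
    ("Carrier E", "civil penalty", True),
]

tallies = defaultdict(lambda: [0, 0])  # action -> [repeat violations, cases]
for _, action, recurred in history:
    tallies[action][0] += int(recurred)
    tallies[action][1] += 1

for action, (repeats, cases) in tallies.items():
    print(f"{action}: {repeats} of {cases} case(s) followed by a repeat violation")
```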
In addition, TSA officials said that they were not convinced that air carrier compliance is influenced by enforcement actions, especially since air carriers are known to intentionally set aside funds when developing their annual budgets in anticipation that they will be fined for some type of security violation during the year. One TSA official stated that air carrier compliance with TSA security requirements is not always within the air carrier's control and is largely influenced by the security measures in place at the airport, as well as restrictions placed on air carriers by host government laws and regulations. When analyzing the fiscal year 2005 air carrier inspection results, we identified only one instance where noncompliance due to a conflict between TSA requirements and host government law resulted in an inspector requesting that enforcement action be taken against the air carrier. However, TSA chose not to take enforcement action against the air carrier and instead decided to work with the host government to resolve the conflict. Despite the concerns raised by TSA officials, without using the analysis of air carrier inspection results to develop performance measures, TSA managers may not be able to identify which approaches for improving air carrier compliance are working well and which approaches could be improved upon.

TSA is taking action to address challenges—particularly the lack of available inspectors and various host government concerns—that have limited its ability to conduct foreign airport assessments and air carrier inspections according to schedule. TSA has developed a risk-based approach to scheduling foreign airport assessments, and is in the process of developing a risk-based approach for scheduling air carrier inspections, to enhance the agency's ability to focus its limited inspector resources on higher-risk airports. The risk-based scheduling approach is also expected to reduce the number of visits TSA conducts at low-risk foreign airports, which may help address some host governments' concerns regarding the resource burden that results from frequent airport assessments by TSA and others. Harmonization—that is, mutual recognition and acceptance—of TSA, host government, and third party (e.g., European Commission) aviation security standards and assessment and inspection processes may also help TSA address host government concerns regarding resource burden. Specifically, when the opportunity is available, TSA is considering conducting joint assessments with some host governments or third parties, such as the European Commission, which would reduce the number of airport visits experienced by some countries. In addition to addressing concerns regarding the resource burden placed on host governments as a result of frequent airport visits, TSA has taken steps to address some country-specific challenges that have limited TSA's ability to conduct foreign airport visits.

Various challenges have affected TSA's ability to maintain its schedule of conducting foreign airport assessments and air carrier inspections. The ability to conduct these assessments and inspections as scheduled is important, according to TSA officials, because foreign airport and air carrier compliance with applicable security requirements may deteriorate significantly between assessments. As the time between visits increases, the likelihood may also increase that security deficiencies at foreign airports and among air carriers will arise and go undetected and unaddressed.
TSA officials also stated that conducting assessments and inspections on a consistent basis helps to ensure that foreign countries continue to comply with ICAO standards and are operating with effective security measures. TSA data show that the agency deferred 90 of the 303 (about 30 percent) foreign airport visits that were scheduled for fiscal year 2005, which include both foreign airport assessments and air carrier inspections. According to TSA, these deferments resulted primarily from a lack of available inspectors to conduct the assessments and inspections. Our analysis identified that the reported shortage of available inspectors reflected the fact that (1) the inspector staff available to conduct the assessments and inspections was less than the number authorized at each of TSA’s five IFOs at some point during fiscal year 2005 and (2) TSA scheduled more foreign airport visits during the fiscal year than available inspectors could complete. TSA officials cited several reasons why the IFOs operated in fiscal year 2005 with fewer inspectors than had been budgeted. First, TSA officials stated that due to State Department limitations on the number of inspectors that can be staffed at IFOs overseas, TSA did not have the budgeted number of inspectors on board to complete assessments and inspections scheduled for fiscal year 2005. Second, TSA officials stated that significant turnover among international inspectors and the subsequent lengthy process for filling vacant inspector positions also contributed to the lack of available inspectors. TSA officials attributed the turnover of international inspectors to various factors, including TSA’s policy that limits the term of international inspectors at overseas IFOs to 4 years, the lack of opportunities for career advancement when stationed at an IFO, and unique difficulties inspectors experience when living and working overseas, such as disruptions to family life. As of January 2007, TSA officials did not have any specific efforts under way to help reduce turnover of international inspectors. Further, TSA officials stated that it takes an average of about 6 months to fill a vacant inspector position, due to the lengthy process for vetting newly hired inspectors. Specifically, once hired, international inspectors must be processed through the State Department, which entails applying for and receiving medical clearances, security clearances, a diplomatic passport, and visas. TSA officials stated that expediting the process of filling vacant positions is largely outside of TSA’s control. However, TSA assigned a headquarters official to oversee this process to identify opportunities for accelerating it. Table 3 shows the number of inspectors budgeted for and available at the IFOs each month during fiscal year 2005. Even if TSA had been operating at its budgeted inspector staffing level, the agency may still have deferred some of the foreign airport assessments and air carrier inspections scheduled for fiscal year 2005 because, according to TSA officials, internal policy required them to schedule more foreign airport visits than the budgeted number of inspectors could reasonably have conducted. According to TSA officials, this internal policy was developed by the Federal Aviation Administration, which was responsible for conducting foreign airport assessments and air carrier inspections prior to TSA. TSA officials also stated that the Federal Aviation Administration had more available inspectors to conduct assessments and inspections than TSA. 
TSA officials stated that each international inspector should reasonably be able to conduct between 8 and 12 foreign airport visits per year, depending on the amount of time inspectors remain on site to assist foreign officials and air carrier representatives in addressing security deficiencies that are identified during assessments and inspections. However, according to data provided by TSA, each of the 5 IFOs scheduled more than 12 foreign airport visits per inspector for fiscal year 2005. Table 4 shows the average number of foreign airport visits scheduled per international inspector for fiscal year 2005. TSA officials acknowledged that for fiscal year 2005 they scheduled more foreign airport visits than the budgeted level of inspectors could have reasonably conducted. However, TSA has taken steps to compensate for the shortage of international inspectors by utilizing domestic inspectors to help complete the foreign airport assessments and air carrier inspections that were scheduled for fiscal year 2005. Specifically, domestic inspectors were used to assist with about 34 percent of foreign airport assessments and about 35 percent of air carrier inspections. However, despite the use of domestic inspectors, TSA still had to defer foreign airport assessments and air carrier inspections. TSA headquarters officials and IFO staff further stated that the heavy reliance on domestic inspectors to conduct foreign airport assessments and air carrier inspections is not desirable because domestic inspectors lack experience conducting assessments using ICAO standards or inspecting foreign operations of air carriers, as well as working in the international environment. Additionally, using domestic inspectors sometimes presents challenges in planning and coordinating foreign airport visits. Specifically, it can be difficult to obtain clearance from the State Department and host government to allow domestic inspectors to enter foreign countries because TSA may not always be able to provide sufficient notice that domestic inspectors will be participating in airport visits, particularly when the need for a domestic inspector is determined on short notice. Moreover, according to TSA officials, the availability of domestic inspectors may change unexpectedly when they are needed to remain in the United States. TSA officials also said that domestic inspectors may not be available for the entire 4-week period that it takes to prepare for, conduct, and write reports for foreign airport assessments and air carrier inspections. Last, TSA officials stated that compared to international inspectors, some domestic inspectors are not effective at taking notes while conducting observations at foreign airports, nor are some domestic inspectors effective at preparing foreign airport reports—specifically, their word choices for describing security conditions at airports are not always sensitive to the concerns of foreign officials. According to TSA officials, if foreign officials take offense at the way in which TSA portrays the security deficiencies at their airports, foreign officials may no longer allow TSA to conduct airport assessments in their countries. TSA officials stated that they enhanced the notetaking module for the training provided to personnel conducting assessments and inspections overseas. However, for the reasons discussed above, TSA international officials plan to lessen their reliance on domestic inspectors. 
Risk-Based Approach: A risk-based approach entails consideration of terrorist threats, the vulnerability of potential terrorist targets to those threats, and the consequences of those threats being carried out when deciding how to allocate resources to defend against these threats. Risk-based, priority-driven decisions can help inform decision makers in allocating finite resources to the areas of greatest need.

During October 2006, TSA began implementing a risk-based approach to scheduling foreign airport assessments in order to focus its limited inspector resources on higher-risk airports. Another potential benefit of TSA's new approach is that it may allow TSA to reduce its reliance on domestic inspectors. The objectives of TSA's risk-based scheduling approach are to (1) determine the appropriate frequency of foreign airport visits, and (2) identify the appropriate number of inspectors needed for each IFO based on the deployment availability of inspectors, the risk-based priority of each location, and the number of visits required each year. Under the risk-based approach, when fully implemented, foreign airports are categorized based on risk level and, depending on the category in which they are placed, are scheduled to be assessed once a year, once every 2 years, or once every 3 years. According to information provided by TSA, under this approach the number of foreign airport assessments scheduled each year will decrease by about 38 percent (from 170 to 105 assessments). TSA officials stated that the reduction in the number of annual foreign airport assessments will help enable inspectors to complete foreign airport assessments according to schedule. Based on our analysis, TSA's risk-based approach for scheduling foreign airport assessments is consistent with generally accepted risk management principles.

While it appears that this risk-based approach will reduce the number of foreign airport assessments international inspectors are expected to conduct in a year, it is too soon to determine the impact of this approach on TSA's ability to complete scheduled foreign airport visits—including assessments and inspections—for two key reasons. First, TSA has not yet finalized its risk-based approach to scheduling air carrier inspections. In February 2007, TSA officials stated that the draft version of the risk-based approach to scheduling air carrier inspections was being vetted through the agency, but they do not expect the final version to be approved until spring 2007. TSA officials stated that in developing the risk-based approach for scheduling air carrier inspections, they determined that, unlike the situation with airports, using previous inspection results was not the best way to determine air carrier vulnerability. Rather, TSA officials expect to use foreign airport assessment results to determine the vulnerability of air carriers operating out of those airports, especially considering that the security status of foreign airports influences TSA's decision to impose additional security requirements on air carriers operating out of those airports. Second, it is uncertain how TSA's upcoming audits of foreign repair stations will affect the workload of international inspectors.
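Returning to the risk-based scheduling approach described above, the following sketch shows in simplified form how airports might be scored and mapped to assessment cycles of 1, 2, or 3 years, and how the resulting annual workload could be projected. The scoring formula, weights, and thresholds are assumptions for illustration only; TSA's actual risk methodology is not described in this report.

```python
# Simplified, hypothetical illustration of risk-based assessment scheduling.
# The scoring formula, factor scales, and thresholds are illustrative only.

def risk_score(threat, vulnerability, consequence):
    """Combine the three risk factors (each scored 1-5 here) into one score."""
    return threat * vulnerability * consequence

def assessment_cycle_years(score):
    """Map a risk score to an assessment frequency (hypothetical thresholds)."""
    if score >= 60:
        return 1  # highest-risk airports: assessed every year
    if score >= 20:
        return 2  # moderate-risk airports: assessed every 2 years
    return 3      # lowest-risk airports: assessed every 3 years

# Hypothetical airports and factor scores.
airports = [("Airport A", 5, 4, 5), ("Airport B", 3, 3, 3), ("Airport C", 1, 2, 2)]

expected_per_year = 0.0
for name, threat, vulnerability, consequence in airports:
    cycle = assessment_cycle_years(risk_score(threat, vulnerability, consequence))
    expected_per_year += 1 / cycle  # expected assessments per year for this airport
    print(f"{name}: assessed once every {cycle} year(s)")

print(f"Expected assessments per year across these airports: {expected_per_year:.1f}")
```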
In December 2003, Congress passed the Vision 100—Century of Aviation Reauthorization Act (Vision 100), which mandated that TSA issue regulations to ensure the security of foreign and domestic repair stations and, in coordination with the Federal Aviation Administration (FAA), complete a security review and audit of foreign repair stations certified by FAA within 18 months of issuing the regulations. Currently, there are approximately 665 FAA-certified repair stations in foreign countries that TSA is required to audit. Of these, 93 are deemed substantial with regard to safety and security in that they perform work on the airframe, flight controls, or propulsion systems. In addition, another 38 are located in countries that, pursuant to Vision 100, TSA and FAA must give priority to because they have been identified as posing the most significant security risks. TSA plans to initiate security audits of the repair stations during fiscal year 2007. Specifically, TSA expects to conduct 127 audits of foreign repair stations during the initial year, focusing on those located in high- threat areas. According to TSA, the majority of repair stations deemed substantial (65 of 93)—are located on or near foreign airports already subject to assessment by TSA. TSA expects that it will take inspectors 3 days to complete initial audits if the foreign repair stations are collocated with foreign airports being assessed, and 5 days to complete for stations which are not collocated. According to TSA, the agency’s fiscal year 2006 funding levels were sufficient to allow for an additional 13 international inspector positions, including a program manager position, to supplement its current international inspector staff and help meet the requirement to conduct foreign repair station security audits. As of January 2007, all 13 positions were filled, but TSA had not yet begun to conduct these audits. Therefore, it is not yet known how these audits and additional inspector positions will actually affect overall inspector workload or TSA’s ability to complete its foreign assessments and inspections as scheduled. Harmonization of TSA, host government, and third party (e.g., European Commission) security standards and the processes used to assess foreign airports and air carriers would address concerns regarding the resource burden placed on host governments as a result of frequent airport visits conducted by TSA and others. Officials from 3 of the 7 foreign countries we visited in March 2006, as well as officials representing the European Commission—the executive arm of the European Union (which is composed of 27 countries), stated that the frequency of airport assessments and air carrier inspections conducted by TSA and others had placed a significant resource burden on the host government. In addition, a representative of the Association of European Airlines and IATA stated that frequent security inspections by TSA, the host government, and other countries, as well as safety inspections, including inspections conducted by FAA, burdened the limited personnel resources available to air carriers. Specifically, for each inspection, the air carrier must assign one of its employees to escort the inspection team around the airport. (In general, TSA officials must be accompanied by host government officials when conducting foreign airport assessments and air carrier inspections because TSA officials are not allowed to enter restricted areas of the airport unescorted.) 
Belgian officials, for example, proposed to shorten TSA’s fiscal year 2006 assessment of the airport in Brussels, stating that being assessed by TSA, as well as ICAO, the European Commission, and the European Civil Aviation Conference within a short span of time would pose a significant resource burden on the Belgian aviation security department. Host government officials in Germany raised concerns regarding the resource burden placed on their aviation security department due to the frequency of TSA visits. German officials said that TSA scheduled 10 airport visits between January 2006 and September 2006, which German officials viewed as excessive. In addition to individual European countries, the Director of Security for the Protection of Persons, Goods, and Installations for the European Commission’s Directorate General of Transport and Energy wrote a letter to the TSA Assistant Secretary dated March 9, 2006, expressing concern about the frequency of TSA airport assessments and air carrier inspections in Europe. The Director suggested that TSA consider the high level of quality control exercised within the European Union by the European Commission as well as the European Union member states when determining the frequency of airport assessment visits and that TSA and the European Commission embark upon a joint effort to improve coordination of airport visits to alleviate the resource burden placed on member states. TSA’s risk-based approach for scheduling foreign airport assessments could help address some host governments’ concerns regarding the resource burden placed on them in part due to the frequency of airport assessments conducted by TSA. In addition to implementing a risk-based approach to scheduling, there are other potential opportunities for TSA to address host country concerns regarding the resource burden experienced as a result of frequent airport visits. Industry representatives and some host government officials stated that if TSA and other inspecting entities either conducted joint airport assessments and air carrier inspections or used the results of each other’s assessments and inspections in lieu of conducting their own, the frequency of airport visits could be reduced, in turn reducing the resource burden placed on host governments and air carriers. Airports Council International officials we interviewed, who represent airport operators worldwide, stated that if TSA and other inspecting entities were to conduct joint assessments, the resource burden experienced by airport operators would also be reduced. Moreover, officials from 2 of the 7 countries we visited suggested that TSA review the results of airport assessments conducted by the host government or by third parties either in lieu of conducting its own airport assessments or to target its assessments on specific security standards. These officials said that by doing this, TSA could reduce the length of the assessment period, thereby reducing the resource burden placed on host government officials. According to TSA, the agency must physically observe security operations at foreign airports to determine whether airports are maintaining and carrying out effective security measures in order to satisfy its statutory mandate to conduct assessments of foreign airports. This interpretation precludes TSA from relying solely on third party or host government assessments to make this determination. 
However, TSA officials stated that they may be able to use host government or third party assessments—provided that foreign officials make these assessments available to TSA—to help refine the agency's risk-based approach to scheduling foreign airport assessments, such that TSA would be able to focus its limited inspection resources on foreign airports that pose the greatest security risk to the United States. For example, instead of visiting a foreign airport that TSA considers low risk once every 3 years, TSA hypothetically could visit such airports once every 5 years and review third party or host government assessments between visits to help determine whether the airport is maintaining and carrying out effective security measures. This would enable TSA to reduce the number of visits to foreign airports, thus addressing host government officials' concerns regarding the resource burden they experience as a result of frequent airport assessments. However, three of the five IFO managers we interviewed said that the option of using host government assessments is not currently available to them because host governments in their areas of responsibility generally do not have airport assessment programs in place. These IFO managers said that even if host governments had assessment programs in place, they would be cautious about using the assessment reports and conducting joint assessments for two reasons: (1) TSA has not independently evaluated the quality of the assessments conducted by host governments and third parties or the quality of the inspectors conducting these assessments, and (2) host governments and third party inspectors base their assessments on different aviation security standards than TSA does. Similarly, foreign government officials and industry representatives have cited differences in security standards as an impediment to conducting joint assessments and using host government or third party assessments.

Harmonization: In the homeland security context, "harmonization" is a broad term used to describe countries' efforts to coordinate their security practices to enhance security and increase efficiency by avoiding duplication of effort. Harmonization efforts can include countries' mutually recognizing and accepting each other's existing practices—which could represent somewhat different approaches to achieving the same outcome—as well as working to develop uniform standards.

TSA headquarters officials stated that harmonization of airport and air carrier security standards and of airport assessment and air carrier inspection processes would make them less cautious about using other assessment reports and conducting joint assessments. To this end, TSA has taken steps toward harmonizing airport assessment processes and some airport and air carrier security standards with the European Commission. In May 2006, in responding to the European Commission's concerns regarding the frequency of TSA airport assessments and air carrier inspections in Europe, the TSA Assistant Secretary suggested that TSA and the European Commission develop working groups to address these concerns. Further, in June 2006, TSA initiated efforts with the European Commission that will enable each party to learn more about the other party's quality control programs. As part of these efforts, TSA and the European Commission established six working groups. TSA and the European Commission have not established firm time frames for when the working groups are to complete their efforts.
The objectives and the status of the working groups are described in table 5. In December 2006, the TSA Assistant Secretary stated that the agency had primarily coordinated with the European Commission on harmonizing aviation security standards because airports in the European Union generally have a high level of security. The Assistant Secretary further stated that TSA should not focus its inspector resources on foreign airports that are known to have a high level of security, such as several European airports; rather, TSA should focus its limited resources on foreign airports that are known to be less secure. The Assistant Secretary added that a number of options for better leveraging inspector resources are being considered by one of the European Commission-TSA working groups, including scheduling European Commission and TSA assessments to overlap for 1 or 2 days to enable both parties to share their assessment results, which could enable TSA to shorten the length of its assessments. The Assistant Secretary also stated that TSA could eventually recognize European Commission airport assessments as equivalent to those conducted by TSA and have TSA inspectors shadow European Commission assessment teams to periodically validate the results. However, in January 2007, European Union member states reached consensus that they would not share the results of European Commission assessments of their airports with TSA until the following occur: (1) TSA and the European Commission agree upon protocols for sharing sensitive security information; (2) TSA inspectors shadow European Commission inspectors on an assessment of a European airport, and European Commission inspectors shadow TSA inspectors on an assessment of a U.S. airport; and (3) TSA agrees to provide the European Commission with the results of U.S. airport assessments. TSA and European Commission officials stated that they expect information-sharing protocols to be established and shadowing of airport assessments to take place during spring 2007. TSA officials also stated that once the information-sharing protocols are finalized, they would be willing to provide European Union member states with the results of U.S. airport assessments. Aviation industry representatives stated that in addition to facilitating joint assessments and use of third party assessments, harmonization of aviation security standards between countries would enhance the efficiency and effectiveness of international aviation security efforts. For example, IATA representatives we interviewed stated that they have met with TSA officials about harmonizing the list of items prohibited onboard aircraft with the European Commission. IATA officials stated that having different security requirements to follow for different countries leads to confusion, and perhaps noncompliance with security requirements, among air carriers. The Chairman of the Security Committee for the Association of European Airlines stated that there are numerous redundancies in the international aviation security system that could be reduced through harmonization, particularly with regard to screening transfer passengers— passengers who have a layover en route from their originating airport to their destination airport. For example, for a passenger traveling from Frankfurt to Chicago who has to change planes in New York, upon landing in New York, the passenger must be rescreened and have his or her checked baggage rescreened before boarding the flight for Chicago. 
According to officials from various air carrier and airport operator associations, the rescreening of transfer passengers is costly and is only required because individual countries do not formally recognize each other’s aviation security measures as providing an equivalent level of security. Air carrier representatives also stated that because air carriers must use their limited resources to implement redundant security measures, they are not able to focus their resources on implementing other security measures that may be more effective at preventing a terrorist from carrying out an attack. The TSA Assistant Secretary agreed that rescreening transfer passengers that originate from airports that have a high level of security may be unnecessarily redundant. The Assistant Secretary said that TSA plans to assess the effectiveness of the checked baggage screening system commonly used at European airports to determine if that system provides at least the same level of security as TSA’s baggage screening system. However, TSA officials said that even if the agency determines that the baggage screening system in place at European airports provides an equivalent level of security, TSA would still have to rescreen checked baggage for transfer passengers arriving from Europe because the Aviation and Transportation Security Act requires passengers and baggage on flights originating in the United States to be screened by U.S. government employees. According to an attorney in TSA’s Office of Chief Counsel, Congress would have to change the law in order for TSA to discontinue the screening of transfer passengers. TSA also made efforts to harmonize some aviation security measures with other countries outside of the European Union. For example, TSA officials worked with Canadian officials to develop a common set of security requirements for air carriers that have flights between the United States and Canada. Additionally, in response to the alleged August 2006 liquid explosives terrorist plot, TSA initially banned all liquids, gels, and aerosols from being carried through the checkpoint and, in September 2006, began allowing passengers to carry on small, travel-size liquids and gels (3 fluid ounces or less) using a single quart-size, clear plastic, zip-top bag. In an effort to harmonize its liquid screening procedures with other countries, in November 2006, TSA revised its procedures to allow 3.4 fluid ounces of liquids, gels, and aerosols onboard aircraft, which is equivalent to 100 milliliters—the amount permitted by the 27 countries in the European Union, as well as Canada, Australia, Norway, Switzerland, and Iceland. According to the Assistant Secretary of TSA, this means that approximately half of the world’s air travelers will be governed by similar measures with regard to this area of security. ICAO also adopted the liquid, gels, and aerosol screening procedures implemented by TSA and others as a recommended practice. As we reported in March 2007, DHS has also taken steps toward harmonizing international air cargo security practices. As part of this effort, TSA has worked through ICAO to develop uniform international air cargo security standards. 
In addition to concerns regarding the resource burden placed on host governments as a result of frequent airport visits by TSA and others, TSA, on a case-by-case basis, has also had to address host government concerns regarding sovereignty—more specifically, concerns that TSA assessments and inspections infringe upon a host government's authority to regulate airports and air carriers within its borders. According to TSA officials and representatives of the European Commission, several foreign governments have stated that they consider TSA's foreign airport assessments an infringement on their sovereignty. For example, government officials in one country have prevented TSA from assessing the security at their airports and from inspecting non-U.S. air carriers because they do not believe TSA has the authority to assess airports outside of the United States and because they consider the host government to be the sole regulator of air carriers based in their country. Based on the results of air carrier inspections provided to us by TSA, we found that during fiscal year 2005, TSA conducted only one inspection of an air carrier based in this particular country. According to TSA, officials from this country allowed TSA to conduct this particular inspection to accommodate TSA's request to inspect the security of air carriers that had flights originating in Europe and arriving in Washington, D.C., during the January 2005 U.S. presidential inauguration activities. We also found that TSA conducted assessments of four airports in this particular country during fiscal year 2005. TSA officials said that they were able to conduct these assessments under the guise of a TSA "visit" to—versus an "assessment" of—the airport. TSA officials, however, stated that because officials from this country do not believe TSA has the authority to assess the security at their airports, these officials would not accept—either orally or in writing—the results of TSA airport assessments. TSA officials also stated that officials from this country prohibited TSA inspectors from assessing airport perimeter security as well as the contents of the country's individual airport security programs. TSA officials identified at least three additional countries that raised concerns regarding sovereignty. According to TSA, officials from one of these countries stated that they did not know of any international requirements compelling them to allow TSA to assess their airports and that TSA had too many internal flaws to assess airports in other countries. In response to this country's concerns, TSA sent a representative to meet with the country's Minister of Transportation. At the meeting, the Minister granted TSA future access to the country's airports for assessments after being offered the opportunity to visit U.S. airports to observe security measures. Other countries, according to TSA, were concerned about their sovereignty being violated and about TSA gathering intelligence information for the U.S. government through the airport assessment program. TSA officials stated that when unique concerns arise in the future, they will continue to work with countries on a case-by-case basis to try to address those concerns.

The alleged August 2006 terrorist plot to detonate liquid explosives on U.S.-bound flights from the United Kingdom illustrates the continuing threat of international terrorism to commercial aviation and the importance of TSA's foreign airport assessment and air carrier inspection programs.
As part of these programs, TSA has provided on-site consultation and made recommendations to foreign officials on how to resolve security deficiencies. In rare cases, DHS and TSA have taken more aggressive action by notifying the traveling public that an airport does not meet minimum international standards or issuing warning letters and letters of correction to air carriers. While foreign government officials and air carrier representatives acknowledged that TSA’s efforts have helped to strengthen the security of U.S.-bound flights, there are several opportunities for TSA to strengthen oversight of its foreign airport assessment and air carrier inspection programs. First, although TSA has made some efforts to improve its tracking of foreign airport assessments and air carrier inspections, until additional controls are in place to track the status of foreign airport assessments and air carrier inspections, such as whether scheduled assessments and inspections were actually conducted, TSA has limited assurance that all assessments and inspections are accounted for and that appropriate action was taken for airports and air carriers that did not comply with security standards. Second, while TSA has helped to strengthen security at foreign airports by providing assistance to foreign officials, because TSA does not consistently track and document foreign officials’ progress in addressing security deficiencies, it may be difficult for TSA to assess the impact of its efforts on meeting program goals—to ensure that foreign airports and air carriers servicing the United States are meeting, at a minimum, applicable ICAO standards and TSA’s security requirements, respectively. Third, although TSA has established some output performance measures and targets related to the assessment and inspection programs, the current measures do not enable TSA to draw particularly meaningful conclusions about the impact of its foreign airport assessment and air carrier inspection programs on the security of U.S.-bound flights and how to most effectively direct its improvement efforts. TSA has faced several challenges in meeting the goals of its assessment and inspection programs, including a lack of available staff and concerns regarding the resource burden placed on host governments as a result of frequent airport visits conducted by TSA and others. TSA’s development of a risk-based approach to scheduling airport assessments and air carrier inspections is a step in the right direction to address host government concerns and better leverage limited inspector resources. However, it is too soon to determine the extent to which the risk-based approach will help to improve TSA’s ability to complete scheduled foreign airport assessments and air carrier inspections, and the extent to which the approach will alleviate host government concerns regarding the frequency of airport visits. The collaboration between TSA and the European Commission regarding opportunities for conducting joint airport assessments and sharing assessment results, as well as efforts to harmonize aviation security standards—including those related to the screening of liquids, gels, and aerosols—with the European Commission and others, are key steps toward addressing host government concerns regarding the resource burden that results from frequent assessments by TSA and others. 
It will be important for TSA to continue working with foreign officials to address their concerns, such as sovereignty issues, in order to continue assessing the security at foreign airports that service the United States. To help strengthen oversight of TSA’s foreign airport assessment and air carrier inspection programs, in our April 2007 report that contained sensitive security information, we recommended that the Secretary of the Department of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to take the following five actions: develop controls to track the status of scheduled foreign airport assessments from initiation through completion, including the reasons why assessments were deferred or canceled; develop controls to track the status of scheduled air carrier inspections from initiation through completion, including the reasons why inspections were deferred or canceled, as well as the final disposition of any investigations that result from air carrier inspections; develop a standard process for tracking and documenting host governments’ progress in addressing security deficiencies identified during TSA airport assessments; develop outcome-oriented performance measures to evaluate the impact TSA assistance has on improving foreign airport compliance with ICAO standards; and develop outcome-oriented performance measures to evaluate the impact TSA assistance and enforcement actions have on improving air carrier compliance with TSA security requirements. On April 13, 2007, we received written comments on the draft report, which are reproduced in full in appendix V. DHS generally concurred with the findings and recommendations in the report and stated that the recommendations will help strengthen TSA’s oversight of foreign airport assessments and air carrier inspections. With regard to our recommendations that TSA develop controls to track the status of scheduled airport assessments and air carrier inspections from initiation through completion, including the reasons for any deferments or cancellations, and the final disposition of investigations related to air carrier inspections, DHS stated that TSA plans to enhance its tracking system to include the reason for any deferment or cancellation of an airport assessment or an air carrier inspection. The tracking system also incorporates the risk-based methodology and criteria for scheduling foreign airport assessments that TSA adopted in October 2006. Enhancing the tracking system should provide TSA greater assurance that airport assessments and air carrier inspections are conducted within applicable time frames. If properly implemented and monitored, this tracking system should address the intent of our recommendation. Regarding the disposition of investigations related to air carrier inspections, DHS stated that TSA’s Office of Chief Counsel currently documents the final disposition of investigations in PARIS, but TSA will enhance PARIS to ensure that inspection activities are linked to investigations so that comprehensive enforcement information is readily available. A clear link between violations identified as a result of an inspection and the final disposition of those violations is important for maintaining comprehensive inspection and enforcement information. As we reported, TSA often pursued one enforcement action in response to multiple violations, and inspectors were not required to identify which violations were included in the enforcement action. 
Without being able to readily identify what enforcement action was taken in response to specific security violations, TSA cannot readily ensure that air carriers receive appropriate penalties and that security violations are resolved. Concerning our recommendation that TSA develop a standard process for tracking and documenting host governments' progress in addressing security deficiencies identified during assessments, TSA stated that it is currently developing a system whereby outstanding deficiencies identified during an assessment will be tracked along with deficiency-specific information, deadlines, and current status. TSA plans to archive this information for future trend analysis and to provide a historical understanding of each airport's security posture. This effort, if properly implemented, will provide additional relevant, useful information to TSA in performing its oversight responsibilities. TSA concurred with our recommendation that it develop outcome-oriented performance measures to evaluate the impact TSA assistance has on improving foreign airport compliance with international security standards, and on improving air carrier compliance with TSA security requirements. TSA is considering several elements to include in the performance measures, such as the number of assessments conducted, corrective actions recommended, TSA assistance provided, and corrective actions achieved. TSA indicated that its outcome-based performance measures would be structured to recognize the collaborative nature of the process, particularly where corrective action by a foreign government is concerned. Such outcome-based performance measures, if properly developed and utilized, will enable TSA to determine the impact of its airport assessment program and the assistance it provides on improving security at foreign airports. Likewise, these types of measures can be applied to air carrier inspections at foreign airports to determine the impact that such inspections have on compliance and to identify which approaches for improving air carrier compliance with security requirements work well and which could be improved upon.

If you or your staff have any questions about this report, please contact me at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This report will also be available at no charge on the GAO Web site at http://www.gao.gov.

To examine efforts by the Transportation Security Administration (TSA) to ensure the security of international aviation, and in particular flights bound for the United States from other countries, we addressed the following questions: (1) What were the results of TSA's fiscal year 2005 foreign airport assessments and air carrier inspections, and what actions were taken, if any, when TSA identified that foreign airports and air carriers were not complying with security standards? (2) How, if at all, did TSA assist foreign countries and air carriers in addressing any deficiencies identified during foreign airport assessments and air carrier inspections, and to what extent did TSA provide oversight of its assessment and inspection efforts? (3) What challenges, if any, affected TSA's ability to conduct foreign airport assessments and air carrier inspections, and what actions have TSA and others taken to address these challenges?
To determine the results of TSA’s foreign airport assessments we reviewed 128 fiscal year 2005 assessment reports, the most recent year for which complete foreign airport assessment reports were available. To determine the extent to which foreign airports complied with International Civil Aviation Organization (ICAO) standards and recommended practices, we looked at the following information contained in the reports: (1) ICAO standards or recommended practices with which the airport did not comply; (2) whether issues of noncompliance were “old” (identified during the previous assessment) or “new” (identified during the current assessment); (3) explanation of the problems that existed that caused the airport not to comply with ICAO standards or recommended practices, and, if provided, any actions taken by the host government to address the problems; (4) TSA’s recommendations for how the airport could correct security deficiencies in order to meet ICAO standards or recommended practices; and (5) whether issues of noncompliance remained “open” (unresolved) or “closed” (resolved) prior to the completion of the assessment. We developed an electronic data collection instrument to capture information from copies of the assessment reports. All data collection instrument entries, with the exception of the problem descriptions and recommendations, were verified to ensure they had been copied correctly from the assessment reports. Considering that we only intended to discuss the problem descriptions and the recommendations anecdotally, and given the resources available to verify this information, we verified that the problem descriptions and recommendations had been copied correctly for a random sample of 20 assessment reports from fiscal year 2005. We analyzed the data to determine the frequency with which foreign airports complied with particular categories of ICAO standards and recommended practices, such as passenger screening, checked baggage screening, access controls, etc., and the number of airports that resolved deficiencies upon completion of the assessment. To determine the results of TSA’s air carrier inspections, we obtained inspection data from TSA’s Performance and Results Information System (PARIS). For the purposes of our review, we analyzed the results of inspections conducted in fiscal year 2005 to be consistent with the analysis performed on the results of foreign airport assessments for fiscal year 2005. TSA’s inspections database contained information on 529 air carrier inspections at 145 foreign airports in 71 countries conducted by TSA during fiscal year 2005. Specifically, the inspections database included the date and location of the inspection, the inspected air carrier, the security requirements being inspected, as well as the inspector’s determination as to whether the air carrier was or was not in compliance with security requirements. Prior to conducting any analysis, we assessed the reliability of the inspection data by performing electronic testing for obvious errors in accuracy and completeness. Our testing revealed a few errors, such as inconsistencies in the names of individual air carriers or incorrectly identifying the airport as the assessed entity rather than the air carrier. We also found instances of inspections conducted at domestic airports that were included in the data; those inspection records were removed. We also interviewed agency officials familiar with the data, and worked with them to resolve the data problems we identified. 
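The electronic testing described above—checking for obvious errors such as inconsistent air carrier names and out-of-scope domestic inspection records—can be illustrated with the following sketch. The field names and sample records are hypothetical and do not reflect PARIS's actual schema.

```python
# Hypothetical illustration of basic reliability checks on an inspections
# extract; the field names and record layout are not PARIS's actual schema.

records = [
    {"airport": "AAA", "carrier": "Carrier A", "domestic": False},
    {"airport": "BBB", "carrier": "carrier a ", "domestic": False},
    {"airport": "CCC", "carrier": "Carrier B", "domestic": True},
]

# Flag inconsistently recorded air carrier names (simple normalization check).
variants = {}
for record in records:
    key = record["carrier"].strip().lower()
    variants.setdefault(key, set()).add(record["carrier"])
inconsistent = {key: names for key, names in variants.items() if len(names) > 1}
print(f"Carriers recorded under more than one spelling: {inconsistent}")

# Remove inspections conducted at domestic airports, which are out of scope.
foreign_only = [record for record in records if not record["domestic"]]
print(f"{len(records) - len(foreign_only)} domestic record(s) removed; "
      f"{len(foreign_only)} foreign inspection record(s) retained")
```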
Based on our electronic testing and discussions with agency officials, we found the data to be sufficiently reliable for the purposes of our report. For our analysis, we also added information to the inspection records to include the country where the inspection occurred and whether the air carrier being inspected was a U.S.-based air carrier or a foreign air carrier. Finally, to facilitate our analysis, we grouped the security requirements being inspected into several categories, such as aircraft security, cargo, checked baggage, passenger and carry-on screening, and special procedures. To determine the actions taken by TSA when foreign airports did not comply with ICAO standards and recommended practices, we reviewed TSA's Foreign Airport Assessment Program Standard Operating Procedures (SOP). We also reviewed relevant statutory provisions that identify specific actions to be taken by the Secretary of Homeland Security when the Secretary determines that a foreign airport does not maintain and carry out effective security measures. To determine the actions taken by TSA when air carriers did not comply with TSA security requirements, we reviewed fiscal year 2005 information from the findings and investigations databases in PARIS. As with the inspection data, to facilitate our analysis, we included additional information in the findings database, such as the country where the inspection occurred, and whether the air carrier being inspected was a U.S.-based air carrier or a foreign air carrier. Further, we grouped the security requirements being inspected into several categories, such as aircraft security, cargo, checked baggage, passenger and carry-on screening, and special procedures. To assess the reliability of the findings data, we performed electronic testing for obvious errors in accuracy and completeness and interviewed agency officials knowledgeable about the data. We identified two issues of concern during our reliability assessment. First, we found that the findings database is not linked to the inspections database to allow for ready determination of the actions taken by TSA in response to specific deficiencies. Second, the findings database did not consistently include accurate information on actions taken in response to findings. According to TSA officials knowledgeable about the data, the findings database should contain information on actions taken by TSA for each response of “not in compliance” in the inspections database. However, we found that in half of the inspections where deficiencies were identified, such information was not properly recorded in the findings database. Considering the amount of information excluded from the findings database and that this information could not be readily provided by TSA, we determined that the findings data were not sufficiently reliable for conducting evaluative analysis of the actions taken by TSA when security violations were identified during air carrier inspections. However, we determined that the findings data were sufficiently reliable for conducting descriptive analysis of TSA's actions, while including appropriate statements as to their reliability, and for anecdotal purposes. To assess the reliability of the investigations data included in PARIS, we conducted electronic testing and interviewed agency officials knowledgeable about the data.
We found that information in the investigations database is not recorded in such a way that one can readily determine which air carrier inspection, and in particular which specific security violations identified during an inspection, were the impetus behind a particular investigation. TSA officials explained that inspectors are not required to link an investigation to the inspection from which it stemmed. When we performed our analysis, TSA officials were, however, able to provide links to inspections for some of the investigations. For the remainder of the investigations data, we attempted to make the link between inspections and investigations by using information from the investigations database, such as the date when the investigation record was created and the narrative fields, which in some cases identified whether the investigation was a result of an inspection or some other offense, such as an air carrier allowing a passenger on the No-Fly list to board a U.S.-bound flight. Our analysis of actions taken by TSA when air carriers did not comply with security requirements is, therefore, based on those investigations that we were able to link to fiscal year 2005 inspection activity. We found these data to be sufficiently reliable for purposes of this report. For additional information on actions taken by TSA when foreign airports and air carriers did not comply with security requirements, we interviewed TSA headquarters and field officials in the Office of Security Operations—the division responsible for conducting foreign airport assessments and air carrier inspections and making recommendations for corrective action—and the Transportation Sector Network Management division—the unit responsible for working with foreign officials to coordinate TSA foreign airport visits and for monitoring host government and air carrier progress in addressing security deficiencies. To identify actions taken by TSA to help foreign officials address security deficiencies identified at foreign airports during the fiscal year 2005 airport assessments, we obtained and analyzed information from the fiscal year 2005 foreign airport assessment reports. To obtain information on TSA's efforts to assist air carrier representatives in addressing identified security deficiencies, we reviewed information in the findings and investigations databases from TSA's PARIS. As previously discussed, we assessed the reliability of the findings and investigations data by performing electronic testing for obvious errors in accuracy and completeness, and interviewed agency officials knowledgeable about the data. While we identified errors during our reliability assessment, many of which remained unresolved, we determined that the findings and investigations data were sufficiently reliable for anecdotal descriptions of the assistance TSA provided air carriers to help them address security deficiencies. To obtain additional information on actions taken by TSA to address security deficiencies identified during foreign airport assessments and air carrier inspections, we interviewed TSA headquarters officials from the Office of Security Operations and the Transportation Sector Network Management division.
We also made site visits to TSA's five international field offices (IFO) located in Los Angeles, Dallas, Miami, Frankfurt, and Singapore, where we met with the IFO managers; international aviation security inspectors, who conduct foreign airport assessments and air carrier inspections; 10 of the 20 TSA Representatives (TSAR), who schedule TSA airport visits and follow up on host governments' progress in addressing security deficiencies; and 4 of the 6 International Principal Security Inspectors (IPSI), who are responsible for assisting foreign air carriers in understanding and complying with TSA security requirements. We also met with 3 of the 15 Principal Security Inspectors (PSI) located at TSA headquarters who are responsible for helping U.S. air carriers understand and comply with TSA security requirements. During each of these interviews, we discussed these officials' responsibilities related to the foreign airport assessment and air carrier inspection programs, including their role in assisting foreign officials and air carrier representatives in correcting security deficiencies identified during assessments and inspections. Information from our interviews with government officials, members of the aviation industry, and TSA officials and inspectors cannot be generalized beyond those that we spoke with because we did not use statistical sampling techniques in selecting individuals to interview. To obtain a greater understanding of the foreign airport assessment and air carrier inspection processes, as well as the assistance TSA provides, we accompanied a team of TSA inspectors and a TSAR during the assessment of E.T. Joshua International Airport in Kingstown, St. Vincent and the Grenadines, and the inspection of Caribbean Sun Airlines at that location. Moreover, we identified and met with officials from other U.S. government agencies that assist foreign officials in enhancing security at foreign airports. Specifically, we met with officials from the Department of Justice, Department of State, Department of Transportation, and the U.S. Trade and Development Agency. To obtain information on the extent to which TSA provided oversight of its assessment and inspection efforts, we reviewed the agency guidance for each program. We also reviewed sections of the fiscal year 2005 foreign airport assessment reports for completeness and general consistency with TSA guidance for preparing assessment reports. In addition, we reviewed the inspections, findings, and investigations databases in PARIS for completeness and the ability to track air carrier inspection activity from initiation through completion, including actions taken against air carriers that did not comply with security requirements. We compared TSA's guidance and reporting mechanisms for the assessment and inspection programs with federal standards for internal controls and associated guidance. We also met with TSA headquarters officials, IFO managers, TSARs, and aviation security inspectors to discuss the extent to which they documented assessment and inspection activity from initiation through completion and follow-up activity for unresolved security deficiencies. We obtained additional information on TSA's oversight of the foreign airport assessment and air carrier inspection programs, particularly with regard to assessing the impact of these programs, by reviewing TSA's fiscal year 2006 Program Assessment Rating Tool (PART) submissions.
The Office of Management and Budget describes PART as a diagnostic tool meant to provide a consistent approach to evaluating federal programs as part of the executive budget formulation process. PART includes information on an agency's program goals and performance measures used to assess whether program goals are being met. We compared the program goals identified in TSA's PART submission with the Government Performance and Results Act of 1993 (GPRA), which identifies requirements for the types of measures federal agencies should use to assess the performance of their programs. We also interviewed TSA headquarters and field officials to obtain their perspectives on appropriate ways to assess the performance of the foreign airport assessment and air carrier inspection programs. To identify challenges that affected TSA's ability to conduct foreign airport assessments and air carrier inspections at foreign airports, we met with TSA headquarters and field officials in the Office of Security Operations and the Transportation Sector Network Management division regarding their efforts to obtain access to foreign airports to conduct assessments and inspections. We also visited the embassies of 16 nations and the Delegation of the European Commission in Washington, D.C., to obtain the perspectives of foreign transportation security officials on TSA's airport assessment and air carrier inspection programs. In addition, we conducted site visits to meet with aviation security officials in Belgium, Canada, Germany, the Philippines, St. Vincent and the Grenadines, Thailand, and the United Kingdom to discuss their perspectives on TSA's foreign airport assessment and air carrier inspection activity. We selected these locations because they met one or more of the following criteria: a relatively high volume of passengers fly to the United States each year, TSA assigned a relatively high threat ranking to the country, the country received aviation security training or technical assistance from a U.S. government agency, or a TSA international field office was located in the country. We also met with individuals representing 11 air carriers, including both U.S. and foreign airlines, to obtain their perspectives on TSA's foreign airport assessment and air carrier inspection programs. Additionally, we met with officials from the European Commission, the European Civil Aviation Conference, and ICAO to discuss similar efforts these organizations have in place to ensure compliance with international aviation security standards. Information from our interviews with foreign government officials and members of the aviation industry cannot be generalized beyond those that we spoke with because we did not use statistical sampling techniques in selecting individuals to interview. We also reviewed documentation associated with TSA's risk-based methodology for scheduling foreign airport assessments and air carrier inspections, which TSA intended to use to address some of the challenges in conducting assessments and inspections, and compared the methodology to our risk management guidance. In addition, we interviewed 4 Federal Security Directors and 7 aviation security inspectors stationed in the United States to discuss their support of the foreign airport assessment and air carrier inspection programs as well as the impact, if any, that their involvement in these programs has had on their operations at U.S. airports.
We conducted our work from October 2005 through March 2007 in accordance with generally accepted government auditing standards. The aircraft operator standard security program (AOSSP) is designed to provide for the safety of persons and property traveling on flights against acts of criminal violence and air piracy and against the introduction of explosives, incendiaries, weapons, and other prohibited items on board an aircraft. TSA requires that each air carrier adopt and implement a security program approved by TSA for scheduled passenger and public charter operations at locations within the United States, from the United States to a non-U.S. location, from a non-U.S. location to the United States, and from a non-U.S. location to a non-U.S. location (for example, an intermediate stop such as Singapore to Tokyo to the United States). The AOSSP developed by TSA and used by U.S.-based carriers is divided into chapters and lays out security requirements for operations. Table 6 summarizes requirements applicable to flights operating from a non-U.S. location to the United States. When TSA determines that additional security measures are necessary to respond to a threat assessment or to a specific threat against civil aviation, TSA may issue a Security Directive setting forth mandatory measures. Each air carrier required to have a TSA-approved security program must comply with each Security Directive issued to it by TSA, within the time frame prescribed in the Security Directive for compliance. TSA requires that the security program of a foreign air carrier provide passengers a level of protection similar to the level of protection provided by U.S. air carriers serving the same airports. The security program must be designed to prevent or deter the carriage onboard airplanes of any prohibited item, prohibit unauthorized access to airplanes, ensure that checked baggage is accepted only by an authorized agent of the air carrier, and ensure the proper handling of cargo and checked baggage to be loaded onto passenger flights. In addition, carriers are requested to provide an acceptable level of security for passengers by developing and implementing procedures to prevent acts of unlawful interference. TSA's foreign air carrier model security program was prepared to assist foreign airlines in complying with security requirements for operations into and out of the United States. Table 7 summarizes requirements applicable to foreign carriers' flights operating from a non-U.S. location to the United States. When TSA determines that additional security measures are necessary to respond to an emergency requiring immediate action with respect to safety in air transportation, it may issue an emergency amendment. An emergency amendment mandates additional actions beyond those in the air carrier's security program. When TSA issues an emergency amendment, it also issues a notice indicating the reasons for adopting the amendment. Air carriers are required to comply with emergency amendments immediately. The State Department's Anti-Terrorism Assistance (ATA) program seeks to provide partner countries the training, equipment, and technology they need to combat terrorism and prosecute terrorists and terrorist supporters. The Anti-Terrorism Assistance program was established in 1983. Countries must meet at least one of the following four criteria to participate in the ATA program: The country or region must be categorized as having a critical or high threat of terrorism and be unable to protect U.S.
facilities and personnel within the country. There are important U.S. policy interests with the prospective country, which may be supported through the provision of antiterrorism assistance. For example, officials in one country received assistance through the ATA program because they allowed the United States to establish air bases in their country. The prospective country must be served by a U.S. air carrier or be the last point of departure for flights to the United States. The prospective country cannot be engaged in gross human rights violations. The State Department determines whether and what training and assistance to provide countries based on needs assessments done by State Department personnel along with a team of interagency subject matter experts. The assessment team evaluates prospective program participants using 25 Antiterrorism Critical Capabilities. Program officials stated that the assessment is a snapshot of the country's antiterrorism capabilities, including equipment, personnel, and available training. ATA program officials stated that the assessment includes a review at several levels, including tactical capabilities (people and resources), operational management capabilities (overall management and ability), and strategic capabilities. Two of the 25 capabilities reviewed during the needs assessments are related to aviation security. Those are Airspace Security and Air Port of Entry Security. The first is an assessment of how a country controls what goes through its airspace. The second is an assessment of security at the country's main airport. According to program officials, when doing an assessment, the ATA team will usually visit the busiest airport within the country to examine the operational security of the airport and assess the training provided to airport security management. The results of the needs assessments determine what type of assistance the State Department will offer to countries participating in the ATA program. The various types of training and assistance offered through the program include crisis management and response, cyber-terrorism, dignitary protection, bomb detection, border control, kidnap intervention and hostage negotiation and rescue, response to incidents involving weapons of mass destruction, counter terrorist finance, interdiction of terrorist organizations, and airport security. During fiscal year 2005, 146 countries received antiterrorism training through the ATA program; 7 countries received training for aviation security. The ATA program offers one course in aviation security, “Airport Security Management.” This is a 1-week seminar that is generally taught in-country. According to State Department officials, TSA employees teach the course. State Department officials stated that this course helps countries to meet internationally recognized aviation security standards established by ICAO. State Department officials stated that while most countries' officials know about ICAO and can obtain ICAO manuals and standards, many of the countries do not have the resources or equipment to operationalize ICAO standards. State Department officials stated that the ATA program offers countries the resources to implement ICAO standards. For fiscal year 2005, aviation security training was provided to 7 countries through the ATA program: Philippines ($94,723), Kazakhstan ($98,200), Bahamas ($95,000), Barbados ($45,900), Dominican Republic ($45,900), Qatar ($98,046), and United Arab Emirates ($95,000).
TSA employees teach in-country aviation security training to foreign officials through the ATA program. In addition, ATA uses TSA staff as subject matter experts when performing needs assessments. The U.S. Trade and Development Agency (USTDA) works to advance economic development and U.S. commercial interests in developing and middle-income countries. The agency funds various forms of technical assistance, training, and business workshops to support the development of a modern infrastructure and a fair and open trading environment. USTDA's use of foreign assistance funds to support sound investment policy and decision making in host countries is intended to create an enabling environment for trade, investment, and sustainable economic development. In carrying out its mission, USTDA gives emphasis to economic sectors that may benefit from U.S. exports of goods and services. For example, according to USTDA, the agency obligated approximately 24 percent of its program funding in support of transportation sector projects. More specifically, according to USTDA, 5.6 percent of the agency's budget is obligated toward projects in the aviation security sector. The general goals of USTDA's work in the aviation security field are to help foreign airports achieve “Category I” status (the FAA classification for an airport that meets minimum safety standards, which allows foreign air carriers to fly from their country of origin directly to the United States), to help countries prepare to pass and adhere to ICAO standards, and to offer training to increase aviation security. According to USTDA, assistance projects and recipients are selected within the framework of USTDA's development and commercial mandate. Generally, projects are not selected based strictly on security (i.e., not selected based on threat) but on the likelihood of a country implementing the recommended actions to obtain greater aviation safety and security. USTDA projects are developed through consultations among USTDA staff, U.S. and foreign embassies, foreign officials (public or private) that have decision-making authority to implement the assistance project, and U.S. industry officials that identify a need for assistance. According to USTDA, when developing the project, the agency evaluates a number of factors, including the priority the government places on the project and whether the entity has the technical capability to implement the project. According to USTDA, this evaluation is conducted in order to ensure that U.S. taxpayers' dollars are wisely used on projects that will help strengthen a foreign country's ability to transport passengers and goods to the United States. After an initial evaluation by USTDA staff, USTDA employs a technical expert to conduct an independent evaluation of the proposed assistance project. That technical evaluation can take two forms: a Desk Study or a Definitional Mission. The Desk Study is completed for proposals that provide sufficient information to allow a technical expert to make an informed decision as to whether or not USTDA should fund the project. If the project proposal does not contain sufficient detail to evaluate without conducting a field site visit, USTDA then employs a small business contractor—or consultant—to conduct a Definitional Mission, which, according to USTDA, costs between $25,000 and $40,000.
The consultant undertaking the Definitional Mission takes 1 to 2 weeks to meet with the stakeholders in the foreign country, including the potential grant recipients, in order to review project ideas and generate additional project opportunities. Upon return from the site visits, the consultant prepares a report for USTDA on the findings of the Definitional Mission. According to USTDA, consultants typically assess more than one proposed assistance project at a time when in the field. To avoid conflicts of interest, the consultant that undertakes the Definitional Mission is prohibited from participating in any of the follow-on work, including the early investment analysis or training recommended in the report. Early investment analysis is the main form of USTDA assistance. According to USTDA, the cost of such assistance typically ranges from $100,000 to $500,000. These technical assistance programs may take from 6 to 18 months to complete. The studies are undertaken by U.S. consulting firms under a grant program and are intended to evaluate the technical, financial, environmental, legal, and other critical aspects of infrastructure development projects that are of interest to potential lenders and investors. Host country project sponsors select the U.S. companies, normally through open competitions. USTDA organizes Annex 17 workshops to help bring developing countries into compliance with ICAO Annex 17. These workshops are designed to give countries assistance before ICAO inspections so that they meet minimum standards and pass inspections. According to USTDA, the workshops suggest ways that relatively poor countries can meet ICAO standards with a low level of technological sophistication. According to USTDA, the workshops focus on enhancing training and improving human resources related to aviation security. According to USTDA, for fiscal year 2005, the agency awarded Chile ($359,000), Haiti ($150,000), Iraq ($243,000), Malaysia ($100,000), Tanzania ($371,000), Ukraine ($625,000), West Africa Regional Training ($353,000), and Worldwide Aviation Security training ($596,000) grant assistance in the aviation security sector. USTDA consults with TSA on an ongoing basis. USTDA used TSA personnel as instructors for the Annex 17 workshops. The Department of Transportation (DOT) manages the Safe Skies for Africa presidential initiative (Safe Skies), which started in 1998. Safe Skies is a technical program that assists participating countries in meeting international aviation safety and security standards. According to DOT officials, Safe Skies is a small program with an annual budget—including operating and administrative costs—between $1 million and $3 million. According to DOT officials, approximately one-fourth of the Safe Skies budget goes toward aviation security. Funding for Safe Skies is provided by the State Department and the U.S. Agency for International Development (USAID). The original Safe Skies participants were selected in 1998 by an interagency committee made up of the Department of Defense, the Department of Transportation, the State Department, and the U.S. Trade and Development Agency. The committee held a series of meetings to consider priority lists created by each agency, cables exchanged with U.S. embassies across sub-Saharan Africa, and responses to questionnaires sent to various states.
The committee selected countries that it believed had the highest likelihood of successfully complying with international aviation safety and security standards set by ICAO and requirements set by the Federal Aviation Administration (FAA) and TSA. The committee also considered U.S. trade interests and regional diversity issues. In the end, countries from across sub-Saharan Africa were selected to participate in the program. Since 1998, only two countries have been added to the list of Safe Skies participants. Both Uganda and Djibouti became Safe Skies countries after President Bush announced the East Africa Counterterrorism Initiative in 2003. All Safe Skies countries receive some degree of aid, with priority going to those countries that demonstrate political will. DOT gauges political will based on consultations with embassies and TSA and whether a country implements recommended safety and security practices. The Administration's priorities are communicated through the State Department. According to DOT, all participants except Zimbabwe have had aviation security, safety, and air navigation surveys of their civil aviation systems performed at their airports by U.S. government subject matter experts. Since September 11, 2001, the State Department has provided $5 million in additional resources for DOT to provide security equipment to Safe Skies countries. DOT officials stated that they worked with their TSA (formerly FAA security) colleagues to perform site visits to help agency officials determine country-specific security equipment needs for the screening of passengers and baggage. According to DOT, Safe Skies has an East Africa aviation security advisor stationed in Nairobi, Kenya, to provide direct advice and technical assistance to Djibouti, Kenya, Tanzania, and Uganda in meeting ICAO standards and to assist these states in addressing potential threats to civil aviation. According to DOT, fiscal year 2005 recipients of Safe Skies assistance were Angola, Cameroon, Cape Verde, Djibouti, Kenya, Mali, Namibia, Tanzania, and Uganda. TSA and DOT responsibilities are laid out in a 2004 TSA-DOT memorandum of agreement. Under this agreement, TSA provides advice, technical assistance, and training through the TSA Enforcement Academy, in addition to providing an aviation security advisor to Safe Skies. These activities are funded by DOT, with funds that were appropriated to USAID and transferred to DOT for the purposes of implementing Safe Skies. TSA also works in partnership with DOT to prioritize recipient countries based on need. The Bureau of International Narcotics and Law Enforcement Affairs (INL) of the Department of State has a program under way aimed at combating alien smuggling and improving border security. The part of the program relating to border security contains elements relating to maritime security and airport security. These efforts are undertaken in cooperation with the Organization of American States' (OAS) Inter-American Committee against Terrorism (CICTE). The INL-OAS efforts began with maritime security and were broadened to include aviation security in 2003. INL officials worked with CICTE officials to select the appropriate OAS member countries to receive training. As of August 2006, the aviation security effort under way was focused on Caribbean nations, and fiscal year 2006 funding was also intended to cover some Central and South American nations. Roughly $264,000 was spent in 2004, $187,110 in 2005, and $236,610 in 2006 on aviation security.
INL funds pay for aviation security training courses, and the courses are taught by TSA officials. These training courses are aimed at helping countries to develop national civil aviation security programs and other essential plans based on ICAO standards, as well as crisis management. INL funds were used to pay for national development workshops for Caribbean countries. These workshops were taught by TSA staff who spent 1 week in each Caribbean country. While in country, TSA representatives reviewed the country's security program, looked for deficiencies within the security program, and attempted to build a program that would resolve the deficiencies they identified. According to OAS, participants in these workshops identified recommendations to improve aviation security and combat terrorism and submitted the recommendations to their respective governments. The workshops addressed enhancements to the national security program, national legislation, oversight, national security committees, and program approval processes. According to OAS, in 2006, these workshops took place in Antigua and Barbuda, Bahamas, Belize, Dominican Republic, Grenada, Guyana, Jamaica, St. Kitts and Nevis, and St. Vincent and the Grenadines. According to OAS, starting in September 2006, this program began functioning in Central America, where national development workshops were planned to take place in Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. According to OAS, in addition to the national development workshops, this program also offers a 5-day crisis management workshop for midlevel to senior-level aviation management and other government officials. INL, through CICTE, also funds aviation security courses that are taught by ICAO instructors. According to OAS, the recipient countries of CICTE-sponsored aviation security training for calendar year 2006 were Antigua and Barbuda, Bahamas, Barbados, Belize, Bolivia, Colombia, Costa Rica, Dominican Republic, El Salvador, Grenada, Guatemala, Guyana, Honduras, Jamaica, Nicaragua, Panama, Paraguay, Peru, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, Trinidad and Tobago, and Uruguay. TSA officials are the instructors for the on-site workshops. CICTE established a memorandum of agreement with TSA and discussed the best approach for helping OAS members develop a long-term international aviation security program. CICTE and TSA decided that in-country, on-the-ground visits would be the best approach, since these allow CICTE and TSA to see which problems are present. According to OAS, during the fourth quarter of 2006, CICTE received grant funding to provide aviation security training courses for the nine countries that will host the 2007 Cricket World Cup. According to OAS, grant funding was used to support two aviation security training courses—the Basic Security Training Course and the Aviation Security Training Course. The Basic Security Training Course is a 7-day course focused on improving aviation security screeners' ability to detect threat items using X-ray machines, metal detection portals, physical search techniques, and explosive trace detection technologies. According to OAS, the Aviation Security Training Course is a 9-day course that addresses concepts and principles of managing aviation security operations within the unique environment of an international airport.
Course content is also based on ICAO standards and recommended practices and is focused on the protection of passengers, crew, ground personnel, the general public, the aircraft, and airport facilities. According to OAS, practical exercises are used to reinforce classroom learning. This course provided training to midlevel managers and supervisors who are responsible for aviation security program planning, oversight, and operations. According to OAS, TSA instructors train these officials in identifying vulnerabilities at their airports, developing preventive measures, and allocating resources to handle the flow of passengers while maintaining adequate security. The recipient countries for calendar year 2006 and the first half of 2007 are Antigua and Barbuda, Grenada, Guyana, Jamaica, St. Kitts and Nevis, and St. Lucia. The Department of Justice's (DOJ) International Criminal Investigative Training Assistance Program (ICITAP) aims to develop law enforcement agencies and systems. Training is only one component of ICITAP's holistic approach to this mission. ICITAP has an ongoing relationship with the Department of State to offer various types of training. Since 2000, ICITAP has facilitated Department of State-initiated aviation security training in Ghana and the Dominican Republic and has conducted an assessment in Benin. The Department of Justice's involvement can begin when a foreign government makes a request to the U.S. embassy for training to rectify perceived weaknesses in aviation security. The embassy then collaborates with DOJ to put together a proposal for action, which is then sent to the Department of State's Bureau of International Narcotics and Law Enforcement Affairs. INL attempts to obtain a country-specific appropriation for the project and alerts DOJ as to whether funding is available. According to DOJ, INL sometimes targets certain countries for assistance and then asks ICITAP to prepare proposals and budgets to support training activities and technical assistance to improve law enforcement capacity in the host countries. ICITAP assistance included on-site aviation security needs assessments, with ICITAP serving as facilitator and current and former TSA (previously FAA) employees performing the aviation security needs assessments. The assessment was based on standards laid out in ICAO Annex 17. The assessment attempted to broadly gauge the adequacy of the available security systems and each country's ability to manage the systems. As of February 2007, the most recent recipients are Benin ($79,500 in 2002), Ghana ($79,500 in 2002), and the Dominican Republic ($32,000 in 2003). In 2003, as a result of information gathered from TSA's foreign airport assessment report, ICITAP provided drug interdiction training to customs officials in Ghana stationed at the airport. According to DOJ, INL granted $79,500 each to Ghana and Benin for the purpose of providing airport security training. Former and current TSA officials have conducted needs assessments and provided training to foreign officials through ICITAP. In addition to the person named above, Maria Strudwick, Assistant Director; Amy Bernstein; Kristy Brown; Alisha Chugh; Emily Hanawalt; Christopher Jones; Stanley Kostyla; Kyle Lamborn; Thomas Lombardi; Jeremy Manion; and Linda Miller made key contributions to this report.
The Transportation Security Administration's (TSA) efforts to evaluate the security of foreign airports and air carriers that service the United States are of great importance, particularly considering that flights bound for the United States from foreign countries continue to be targets of coordinated terrorist activity, as demonstrated by the alleged August 2006 liquid explosives terrorist plot. For this review, GAO evaluated the results of foreign airport and air carrier evaluations; actions taken and assistance provided by TSA when security deficiencies were identified; TSA's oversight of its foreign airport and air carrier evaluation programs; and TSA's efforts to address challenges in conducting foreign airport and air carrier evaluations. To conduct this work, GAO reviewed foreign airport and air carrier evaluation results and interviewed TSA officials, foreign aviation security officials, and air carrier representatives. Of the 128 foreign airports that TSA assessed during fiscal year 2005, TSA found that about 36 percent complied with all applicable security standards, while about 64 percent did not comply with at least one standard. The security deficiencies identified by TSA at two foreign airports were such that the Secretary of Homeland Security notified the public that the overall security at these airports was ineffective. Of the 529 overseas air carrier inspections conducted during fiscal year 2005, for about 71 percent, TSA did not identify any security violations, and for about 29 percent, TSA identified at least one security violation. TSA took enforcement action--warning letters, correction letters, or monetary fines--for about 18 percent of the air carrier security violations. TSA addressed most of the remaining 82 percent of security violations through on-site consultation. TSA assisted foreign officials and air carrier representatives in addressing identified deficiencies through on-site consultation, recommendations for security improvements, and referrals for training and technical assistance. However, TSA's oversight of the foreign airport assessment and air carrier inspection programs could be strengthened. For example, TSA did not have adequate controls in place to track whether scheduled assessments and inspections were actually conducted, deferred, or canceled. TSA also did not always document foreign officials' progress in addressing security deficiencies identified by TSA. Further, TSA did not always track what enforcement actions were taken against air carriers with identified security deficiencies. TSA also did not have outcome-based performance measures to assess the impact of its assessment and inspection programs on the security of U.S.-bound flights. Without such controls, TSA may not have reasonable assurance that the foreign airport assessment and air carrier inspection programs are operating as intended. TSA is taking action to address challenges that have limited its ability to conduct foreign airport assessments and air carrier inspections, including a lack of available inspectors, concerns regarding the resource burden placed on host governments as a result of frequent airport visits by TSA and others, and host government concerns regarding sovereignty. In October 2006, TSA began implementing a risk-based approach to scheduling foreign airport assessments, which should allow TSA to focus its limited inspector resources on higher-risk airports. 
TSA is also exploring opportunities to conduct joint airport assessments with the European Commission and use the results of airport assessments conducted by the European Commission to potentially adjust the frequency of TSA airport visits.
As provided by the Career Compensation Act of 1949, as amended, service members who become physically unfit to perform military duties may receive military disability compensation under certain conditions. Compensation for disabilities can be in the form of monthly disability retirement benefits or a lump sum disability severance payment, depending on the disability rating and years of creditable service. To qualify for monthly disability retirement benefits, a service member with a permanent impairment that renders him or her unfit for duty must have (1) at least 20 years of creditable service or (2) a disability rating of at least 30 percent. Service members with less than 20 years of creditable service and a disability rating less than 30 percent receive a lump sum severance disability payment. Service members with service connected disabilities may also be eligible for VA disability compensation. Until recently, this military benefit was offset by any VA compensation received. However, the fiscal year 2004 National Defense Authorization Act now allows some military retirees to concurrently receive VA and military benefits. Generally, military disability retirement pay is taxable. Exceptions are (1) if the disability pay is for combat-related injuries or (2) if the service member was in the military, or so obligated, on September 24, 1975. Each of the military services administers its own disability evaluation process. According to DOD regulations, the process should include a medical evaluation board (MEB), a physical evaluation board (PEB), an appellate review process, and a final disposition. Each service member should be assigned a Physical Evaluation Board Liaison Officer (PEBLO), a counselor to help the service member navigate the system and prepare documents for the PEB. As shown in figure 1, there are a number of steps in the disability evaluation process and several factors that play a role in the decisions that are made at each step. There are four possible outcomes in the disability evaluation system. A service member can be found fit for duty; separated from the service without benefits—service members whose disabilities were incurred while not on duty or as a result of intentional misconduct are discharged from the service without disability benefits; separated from the service with lump sum disability severance pay; or retired from the service with permanent monthly disability benefits or placed on the temporary disability retired list (TDRL). The disability evaluation process begins at a military treatment facility (MTF), when a physician identifies a condition that may interfere with a service member’s ability to perform his or her duties. The physician prepares a narrative summary detailing the injury or condition. DOD policy establishes the date of dictation of the narrative summary as the beginning of the disability evaluation process. This specific type of medical evaluation is for the purpose of determining if the service member meets the military’s retention standards, according to each service’s regulations. This process is often referred to as a medical evaluation board (MEB). Service members who meet retention standards are returned to duty, and those who do not are referred to the physical evaluation board (PEB). The PEB is responsible for determining whether service members have lost the ability to perform their assigned military duties due to injury or illness, which is referred to as being “unfit for duty”. 
If the member is found unfit, the PEB must then determine whether the condition was incurred or permanently aggravated as a result of military service. While the composition of the PEB varies by service, it is typically composed of one or more physicians and one or more line officers. Each of the services conducts this process for its service members. The Army has three PEBs located at Fort Sam Houston, Texas; Walter Reed Army Medical Center in Washington, D.C.; and Fort Lewis, Washington. The Navy has one located at the Washington Navy Yard in Washington, D.C. The Air Force has one located in San Antonio, Texas. The first step in the PEB process is the informal PEB—an administrative review of the case file without the presence of the service member. The PEB makes the following findings and recommendations regarding possible entitlement for disability benefits: Fitness for duty—The PEB determines whether or not the service member “is unable to reasonably perform the duties of his or her office, grade, rank, or rating,” taking into consideration the requirements of a member’s current specialty. Fitness determinations are made on each medical condition presented. Only those medical conditions which result in the finding of “unfit for continued military service” will potentially be compensated. Service members found fit must return to duty. Compensability—The PEB determines if the service member’s injuries or conditions are compensable, considering whether they existed prior to service (referred to as having a pre-existing condition) and whether they were incurred or permanently aggravated in the line of duty. Service members found unfit with noncompensable conditions are separated without disability benefits. Disability rating—When the PEB finds the service members unfit and their disabilities are compensable, it applies the medical criteria defined in the Veterans Administration Schedule for Rating Disabilities (VASRD) to assign a disability rating to each compensable condition. The PEB then determines (or calculates) the service member’s overall degree of service connected disability. Disability ratings range from 0 (least severe) to 100 percent (most severe) in increments of 10 percent. Depending on the overall disability rating and number of years of active duty or equivalent service, the service member found unfit with compensable conditions is entitled to either monthly disability retirement benefits or lump sum disability severance pay. In disability retirement cases, the PEB considers the stability of the condition. Unstable conditions are those for which the severity might change resulting in higher or lower disability ratings. Service members with unstable conditions are placed on TDRL for periodic PEB reevaluation at least every 18 months. While on TDRL, members receive monthly retirement benefits. When members on TDRL are determined to be fit for duty, they may choose to return to duty or leave the military at that time. Members who continue to be unfit for duty after 5 years on TDRL are separated from the military with monthly retirement benefits, discharged with severance pay, or discharged without benefits, depending on their condition and years of service. Service members have the opportunity to review the informal PEB’s findings and may request a formal hearing with the PEB; however, only those found unfit are guaranteed a formal hearing. The formal PEB conducts a de novo review of referred cases and renders its own decisions based upon the evidence. 
At the formal PEB hearing, service members can appear before the board, put forth evidence, introduce and question witnesses, and have legal counsel help prepare their cases and represent them. The military will provide military counsel, or service members may retain their own representatives. If service members disagree with the formal PEB's findings and recommendations, they can, under certain conditions, appeal to the reviewing authority of the PEB. Once the service member either agrees with the PEB's findings and recommendations or exhausts all available appeals, the reviewing authority issues a final disability determination concerning fitness for duty, disability rating, and entitlement to benefits.

In 2005, over 23,000 U.S. service members with physical injuries or other conditions went through the military disability evaluation system, according to DOD. In total, the Army, Navy, and Air Force report evaluating over 90,000 PEB cases during fiscal years 2001 through 2005. The Army represents the largest share of disability cases, with Army reserve component members representing approximately 32 percent of all Army cases in 2005 (see table 1). PEB disability caseloads for all services have increased over time, from about 15,000 in fiscal year 2002 to about 23,000 in fiscal year 2005. In fiscal year 2004, the military services spent over $1 billion in disability retirement benefits for over 90,000 service members (see table 2). This table does not include expenditures for lump sum disability payments, which DOD was unable to provide.

The Secretary of Defense oversees the military disability evaluation system through the Under Secretary of Defense for Personnel and Readiness. The Surgeons General for each service are responsible for overseeing their service's MTFs, including the MEBs conducted at each facility. The Deputy Under Secretary of Defense for Military Personnel Policy has oversight of the PEBs and also oversees the Disability Advisory Council. The council is composed of officials from DOD's offices of Military Personnel Policy, Health Affairs, and Reserve Affairs; the services' disability agencies; and the Department of Veterans Affairs (see fig. 2).

The policies and guidance for disability determinations for all service members are somewhat different among the Army, Navy, and Air Force. DOD has explicitly given the services the responsibility to set up their own processes for some aspects of the disability system and has given the services much room for interpretation. Each service has implemented its system somewhat differently. For example, the composition of decision making bodies differs across the services. Additionally, the laws that govern military disability and the policies that DOD and the services have developed to implement these laws have led to reserve members having different experiences with the disability system than active duty members. Some of these experiences result from the part-time nature of reserve service, while others are the consequence of policies and laws specific to reservists. DOD regulations establish some parameters for the disability system and provide guidelines to the services, and the services each have their own regulations in accordance with these. Specifically, the aspects of the system that differ among the services include the characteristics of the medical evaluation board (MEB) and physical evaluation board (PEB), the use of counselors to help service members navigate the system, and procedures to make line of duty determinations.
Appendix III provides a compilation of these and other differences. DOD regulations require that each service set up MEBs to conduct medical evaluations to determine whether the service member meets retention standards according to each service's regulations. The services carry out MEB procedures differently. For example, the Air Force MEB convenes an actual board of physicians who meet regularly and vote to decide whether a service member meets retention standards. In the Army and Navy, in contrast, the MEB is an informal procedure: a service member's case file is passed among the board's members, who evaluate it separately. In all of the services, the medical commander or his or her designee may sign off on the final decision. The services also differ in the qualifications and requirements for MEB membership. The Army and Navy require that at least two physicians serve on an MEB, while the Air Force requires three.

In accordance with DOD regulations, the military services have set up PEBs to evaluate whether service members are fit for duty. DOD regulations provide no guidance concerning how much time a service member has to decide whether to accept the disability decision of his or her informal PEB. According to their regulations, the Army provides service members 10 calendar days; the Navy, 15 calendar days; and the Air Force, 3 duty days. Additionally, DOD regulations provide that service members found unfit for duty by an informal PEB are guaranteed the right to appeal to a formal PEB. However, service members found fit are not guaranteed the right to appear before a formal PEB. While DOD regulations state that a service member has the right to appeal the decision of a formal PEB, they do not state what this appeal process should look like. The services differ in how many appeal opportunities they offer service members after the formal PEB. For example, the Navy and Air Force offer two opportunities for appeal after the formal PEB. The Army also has two opportunities for appeal; in addition, it has the Army Physical Disability Appeal Board, which hears appeals only in certain cases, for example, when the Army Physical Disability Agency revises the finding of the PEB during a quality or mandatory review and the soldier disagrees with the change (see table 3). Further, the services also differ on whether they permit the same members to sit on both the informal and formal PEBs for the same case. The Army allows PEB members to do so, while the Air Force allows this only under certain circumstances. The Navy has no written policy on the matter, although one official from the Navy PEB indicated that the members of the two boards were often the same for a case.

The point at which PEBLOs become involved in the disability evaluation system and the training PEBLOs receive differ among the services. DOD regulations require that each service provide members counseling during the disability evaluation process and outline the responsibilities of these counselors. For example, counselors are expected to discuss with service members their rights, the effects of MEB and PEB decisions, and available benefits. Each service has created PEBLOs in accordance with these rules, but the services have placed the PEBLOs under different commands. In the Army and Air Force, PEBLOs are the responsibility of the medical command; in the Navy, in contrast, the PEBLO responsibility is shared by the PEB and the MTF. Further, the services involve PEBLOs at different points in the disability process.
In the Army and Air Force, PEBLOs begin counseling the service member at the MEB level of the disability process. However, while Navy officials told us that PEBLOs provide counseling at the MEB level, some PEBLOs we interviewed told us that they begin counseling members after the informal PEB has issued its decision. At some MTFs, case managers provide counseling for service members going through the disability evaluation process. The services also differ in their training of PEBLOs. The Army holds an annual conference for PEBLOs and provides on-the-job training. The Navy relies primarily on on-the-job training and also offers quarterly and annual training. The Air Force also relies heavily on on-the-job training and, until recently, held regular training for PEBLOs.

By law, a service member may receive disability compensation for an injury or illness that was incurred or permanently aggravated while in the line of duty. Generally, the military services document that an injury occurred in the line of duty by filling out a form or noting it in the service member's health record. Typically, a service member's commanding officer is responsible for this action, according to the service's policies. Unlike the Army and Navy, the Air Force always requires a line of duty determination for reservists. DOD regulations state that an injury is presumed to have been in the line of duty when it clearly resulted from enemy or terrorist attack, regardless of whether the member is a reserve or active duty member. However, if the injury may have resulted from misconduct or willful negligence, DOD requires the military services to investigate and determine whether the injury did, in fact, occur in the line of duty. The line of duty determination is a complicated process involving a number of people, such as the examining medical official and higher commands. DOD gives the services responsibility for creating the procedures for conducting line of duty determinations, and there are some technical differences in the processes among the services. For example, the services have different rules regarding how long this process should take: the Army and the Air Force place time frames on the process, while the Navy does not.

The laws that govern the military disability system and the policies and guidance that DOD and the services have developed to implement the laws can result in different experiences with the disability system for reservists. Some of these differences are due to the part-time nature of reserve service, while others result from laws and policies specific to reservists. Because they are not on duty at all times, reservists take longer to accrue the 20 years of service that may be needed to earn the monthly disability retirement benefit when the disability rating is less than 30 percent. For example, an active duty service member who enlisted in the Army in 1985 and stayed on continuous active duty would have 20 years toward disability retirement by 2005. An Army reservist who enlisted at the same time, met his training obligations, and had been activated for 1 year would have roughly 5 years and 9 months toward disability retirement by 2005, according to the formula the Army uses to determine years of service toward disability retirement benefits. All three services use the same formula when calculating the 20 years of service requirement for disability retirement benefits.
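To make the scale of this difference concrete, the sketch below works through the conversion in code. It is illustrative only: it assumes the commonly cited retirement-point convention, under which equivalent years of service equal total retirement points divided by 360, and the point value assigned to a typical reserve year is an assumption rather than Army data.

    # Illustrative only: assumes equivalent years of service = total retirement
    # points / 360. The point value per typical reserve year (drills, annual
    # training, and membership) is an assumption, not an official Army figure.
    def equivalent_years(reserve_years, points_per_reserve_year=78, active_duty_days=0):
        reserve_points = reserve_years * points_per_reserve_year
        return (reserve_points + active_duty_days) / 360.0

    # A reservist with 19 typical reserve years plus a 1-year activation:
    print(round(equivalent_years(19, active_duty_days=365), 1))     # about 5 equivalent years

    # An active duty soldier with 20 continuous years of service:
    print(round(equivalent_years(0, active_duty_days=20 * 365), 1))  # about 20 years

Under this kind of conversion, a reservist with 20 calendar years of participation accrues far fewer than 20 equivalent years, which is why the example reservist above has only roughly 5 to 6 years toward disability retirement.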
The part-time status of reservists also makes it more difficult for reservists with preexisting conditions to be covered by the 8-year rule and therefore to be eligible for compensation. By law, service members with at least 8 years of active duty service are entitled to compensation even if their conditions existed before the beginning of their military service or were not service aggravated. This entitlement applies to reservists only when they are on ordered active duty of more than 30 days at the time of PEB adjudication. For reservists, accruing the 8 years necessary for a condition to be covered by this rule can be more difficult than for active duty service members. For example, an active duty service member who enlisted in the Army in 1997 and stayed on continuous active duty would have 8 years toward disability retirement by 2005. A reservist who enlisted at the same time, met his training obligations, and had been activated for 2 years would have roughly 1 year and 3 months of service, according to the Army's 8-year rule formula, and would not be eligible for compensation for a preexisting condition. Further, the services differ slightly in how they calculate the 8 years for reservists. The Army and Navy calculate the 8 years differently from the 20-year requirement, but the Air Force uses the same formula for both. The Army and Navy count only active duty time, while the Air Force also counts time spent in other activities, such as continuing education.

Officials reported that commanders and others responsible for completing line of duty determinations were often uncertain as to when such determinations were necessary for reservists and active duty members. Moreover, these officials noted that in some cases the necessary line of duty determinations were not made, resulting in delays for service members. For example, Air Force officials we spoke with had different impressions as to whether line of duty determinations were always required for reservists, even though Air Force regulations state that they are. Officials from the Army and Army National Guard similarly offered different perspectives on the need for line of duty determinations for reservists.

In the Army, deployed active duty soldiers return to their unit in a backup capacity when they are injured or ill. However, mobilized Army reservists who are injured or ill have no similar unit to return to. Consequently, they may be removed from their mobilization orders, retained on active duty in "medical holdover status," and assigned to a unit, such as a medical retention processing unit. While in medical holdover status, reservists may live on base, at a military treatment facility, at home, or at other locations. After their mobilization orders expire, they can elect to continue on active duty through a program such as medical retention processing, which allows them to continue receiving pay and benefits. The Army reports that between 2003 and 2005, about 26,000 reservists entered medical holdover status (see appendix II). Reservists in medical holdover also generally live farther from their families than injured active duty soldiers do, because the units at military medical facilities are often far from where the reservists' families live. In certain cases, reservists in medical holdover may receive treatment and recuperate at home. The Army's Community-Based Health Care Organizations (CBHCOs) provide medical and case management for these reservists living at home as they receive medical care in their communities.
As of December 2005, about 35 percent of the reservists in medical holdover were being cared for in the CBHCO program. In order to be assigned to the CBHCO program, reservists must meet a number of criteria. For example, reservists must live in communities where they can get appropriate care, and they must also be reliable in keeping medical appointments. DOD has policies and guidance to promote consistent and timely disability decisions, but is not monitoring whether the services are compliant. Neither DOD nor the services systematically determine the consistency of decision making, which would be a key component of quality assurance. With regard to timeliness, DOD has issued goals for processing service members’ cases but is not collecting available information from the services, and military officials have expressed concerns that the goals may not be realistic. Finally, DOD is not exercising any oversight over training for staff in the disability system, despite being required to do so. To encourage consistent decision making, DOD policies require that service members’ case files undergo multiple reviews and federal law requires that disability ratings be based on a common schedule. During both the MEB and PEB stages of the disability process, a service member’s case must be reviewed and approved by several officials with different roles. When rating the severity of a service member’s impairment, all services are required to use a common schedule, VA’s Schedule for Rating Disabilities (VASRD), in accordance with federal law. The VASRD is a descriptive list of medical conditions along with associated disability ratings. For example, if a service member has x-ray evidence of degenerative arthritis affecting two or more joints, “with occasional incapacitating exacerbations,” he or she should receive a rating of 20 percent according to the VASRD. DOD also convenes the Disability Advisory Council, which DOD officials told us is the primary oversight body of the disability system. The disability council is composed of key officials from the three disability agencies of the services, the VA, and relevant DOD officials from the health affairs, reserve affairs, and personnel departments. The council’s mission is to monitor the administration of the disability system and, according to DOD officials, the council serves as a forum to discuss issues such as rules changes and increasing coordination among the services. Currently, the disability council is facilitating a review and revision of all DOD regulations pertaining to the disability system. Military officials view the council as a group that aims to meet quarterly to discuss issues raised by the services. By having these meetings, DOD hopes to bring all of the services “on the same page” when it comes to the disability system. However, military officials reported that the council has not met quarterly in the past year and generally does not produce formal reports for the DOD chain of command. Furthermore, the disability council is staffed by one person at DOD who has additional responsibilities. Military officials also regard the appeals process as helping to ensure the consistency of disability evaluation decision making. However, not all service members appeal. In addition, during the appeals process additional evidence may be presented that may result in a different outcome for the same case. 
Furthermore, the appeals process is designed to determine whether the correct decision was made, rather than whether consistent decisions were made across comparable cases. Despite this policy guidance and the presence of the disability council, both DOD and the three services lack quality assurance mechanisms to ensure that decisions are consistent. Given that one of the primary goals of the disability system is that disability evaluations take place in a consistent manner, collecting and analyzing service members' final disability determinations are critical for ensuring that decisions are consistent. DOD regulations recognize this and require that the agency establish necessary reporting requirements to monitor and assess the performance of the disability system and compliance with relevant DOD regulations. Yet DOD does not collect information from the services on the final disability determinations and personal characteristics of service members going through the disability system. In addition, DOD has not established quality parameters for the services to follow to evaluate the consistency of decision making. As a result, the services generally lack a robust quality assurance process.

In our past work on federal disability programs, we have recommended that quality assurance include two components: (1) the use of multivariate regression analysis examining disability decisions along with controlling factors to determine whether the decisions are consistent and (2) an in-depth independent review of a statistically valid group of case files to determine what factors may contribute to inconsistencies. However, the services were unable to provide any evidence that they are conducting statistical reviews, such as multivariate regression analysis, of their data to determine the consistency of decision making for service members with similar characteristics. Furthermore, while we found that the Army is conducting independent reviews of 25 to 30 percent of its PEB cases, the Navy and Air Force conduct these reviews only when a service member appeals the PEB's decision. Additionally, these reviews assess whether a single case's medical evidence supports the dispositions made (accuracy) rather than the degree to which decisions compare across cases with similar impairments and characteristics (consistency). Without such analysis, the services cannot ensure that adjudicators are making consistent decisions in reservist and active duty cases with similar characteristics.

Officials from the services said that it was very difficult to examine outcomes for consistency because each disability decision is unique and a multitude of factors are considered when rendering a disability decision, some of which could not be captured in a database. For example, individuals' pain tolerance varies, along with their motivation to adhere to treatment programs. Nonetheless, other federal disability programs face the same challenges, have acknowledged the importance of determining consistency of decision making, and have taken some initial steps to develop quality assurance systems. For example, the VA selects a random sample of files for independent review using a standard methodology and compiles the results of these reviews.

DOD regulations set forth timeliness goals for the two major processes of the disability system. According to DOD, the first stage of the process, the MEB, should normally be completed in 30 days or less.
The second stage of the process, the PEB, should normally take 40 days or less. Despite establishing these timeliness goals for the services, DOD is not ensuring compliance with them. DOD does not regularly collect available timeliness data from the services, a necessary first step for determining compliance. The services generally are using their databases to track the timeliness of decisions, but military officials cited confusion regarding the start date for the process. Both the Army and Navy are tracking processing times for both the MEB and PEB using their databases. The Air Force lacks a centralized database to track its MEB cases and therefore can track only PEB timeliness. However, we found that the usefulness of these timeliness data may be undermined by confusion among military officials and data entry staff regarding the starting dates for the disability process. We compared original Army PEB case files to Army electronic data from both its MEB and PEB databases and found that the date a physician dictates a narrative summary, the beginning of the disability process and a critical data point for timeliness calculations, was frequently entered incorrectly into the Army's databases. When we asked about these errors, Army officials said that increased training of data entry staff would help with these problems. Navy officials also said that there was some confusion about how to record starting dates for cases when additional medical information was needed to make a disability decision for a service member.

Data reported by the services on the timeliness of cases generally show that the services are not meeting DOD timeliness goals (see appendix II). Military officials said that these results stemmed in part from the unrealistic nature of the goals themselves. Navy officials told us that they do not consider the 30-day goal for MEB processing to be a performance standard for which they should be held accountable. They also said that the 30-day goal is unrealistic, especially in certain cases, such as when there are addendums to the narrative summary. Army officials also said that it was unrealistic for all MEB cases to be processed in 30 days because certain cases take longer, for example, when a line of duty determination is needed or when certain medical tests are required to diagnose some orthopedic or psychiatric conditions.

While DOD regulations require that the agency develop and maintain training for key participants in the disability system, DOD officials told us that they had given this responsibility to the services. The Assistant Secretary of Defense for Health Affairs is given explicit instructions to develop and maintain a training program for MEB and PEB staff, but officials from the Office of Health Affairs indicated they were unaware that they had this responsibility. In addition, despite high turnover among military disability evaluation staff, the services do not have a system to ensure that all staff are properly trained. This turnover stems, in part, from the military requirement that personnel rotate to different positions in order to be promoted. Military officials told us that, depending on the positions involved, some staff remain in their positions from 1 to 6 years, with most remaining about 3 years. This turnover and the resulting loss of institutional knowledge require that the services systematically track who has been properly trained.
However, all of the services lack data systems that would allow them to do so, an issue that was highlighted in a previous report by the RAND Corporation.

Our analysis of Army disability data from calendar years 2001 to 2005 indicated that, after controlling for many of the differences we found between reservists and active duty soldiers, Army reservists received disability ratings similar to those of their active duty counterparts. We also found that reservists may be less likely to receive military disability benefits. However, data on years of service and preexisting conditions, factors that influence disability benefit decisions, were not available for this analysis. Finally, we were unable to compare processing times for reserve and active duty disability cases because we found that Army data on processing times were not reliable. However, based on these data, some Army officials conclude that reservists' cases often take longer to process through the disability evaluation system than the cases of active duty soldiers.

From 2001 through 2005, the characteristics of Army reservists and active duty soldiers in the disability evaluation system differed in a number of ways. Specifically, reservists tended to have more impairments than active duty soldiers; they were more likely than active duty soldiers to have three or four impairments. Reservists also experienced higher rates of impairments affecting the cardiovascular and endocrine systems, while active duty soldiers experienced a higher rate of impairments affecting the musculoskeletal system. Reservists were more often classified in higher pay grades and more often worked as functional support and administration, crafts workers, and service and supply handlers. See appendix IV. Active component soldiers worked more often as infantry and gun crews, electronic equipment repairers, and communications and intelligence specialists. In addition, while the number of active duty disability cases remained relatively constant from 2001 through 2005, the proportion of cases involving reservists going through the PEB process rose dramatically through 2004. See figure 3. Finally, the demographic characteristics of Army reservists and active duty soldiers in the disability evaluation system also differed. Eighty percent of reservists were male, compared to 76 percent of active duty soldiers, and, on average, reservists were 11 years older than active duty soldiers. See figure 4.

Before controlling for factors that could account for differences in the outcomes of the Army disability evaluation system for reserve and active duty soldiers, our analysis of Army data indicates that, from 2001 through 2005, reservists were assigned slightly higher disability ratings but received benefits less often than active duty soldiers. See appendix V. When we controlled for many of the characteristics of reserve and active duty soldiers that could account for their difference in ratings, we found that, among soldiers who received ratings, the ratings assigned to Army reservists were comparable to those assigned to their active duty counterparts. When we controlled for a more limited number of factors, Army reservists who were determined to be unfit for duty appeared less likely to receive benefits (either monthly disability payments or severance pay). See appendix I. This analysis of benefit outcomes for Army reserve and active duty disability cases could not account for the influence that preexisting conditions and years of service can have on disability decisions.
These factors are key in determining whether an injured or ill service member qualifies for disability benefits. Because we could not test the effect of these factors empirically, we cannot rule out the possibility that one or the other may account for the differences we found. While, according to the Army's own statistics, the PEB process can take longer for reservists than for active duty soldiers, we found that the Army data used to calculate processing times were not of sufficient quality to warrant use in our analysis. Specifically, the dates in the Army's electronic database often did not correspond with the dates recorded in paper files. See appendix I. Nonetheless, the statistics the Army provided indicate that reservists' disability cases reviewed between fiscal years 2001 and 2005 consistently took longer than those of active duty soldiers. Over half (54 percent) of reserve soldiers' cases took longer than 90 days, while over one-third (35 percent) of active duty soldiers' cases exceeded this threshold. See appendix II for more detail.

There are several possible explanations for the differences in processing times between reservists and active duty members, according to the Army. For example, Army officials reported that MEBs often must request medical records from private medical practitioners for reservists' cases, which can involve considerable delays. In addition, the personnel documents for reservists are stored in facilities around the United States and therefore may take longer to obtain than records for centrally located active duty soldiers. Due to the lack of data on these issues, as well as the problems we encountered with the data provided by the Army, we were not able to measure these differences or empirically test possible explanations for the differences the Army reported in the timeliness of disability case processing for Army reservists and active duty soldiers.

The military disability system's outcomes can greatly impact the future of service members, including reservists, injured in service to their country. Given the significance of these decisions, as well as the latitude the services have to implement the system, it is important that DOD exercise proper oversight to make sure the system meets the needs of service members today and in the future. However, DOD is not adequately monitoring the outcomes for active duty and reservist cases in the disability evaluation system. DOD and the services do not have complete and reliable data for all aspects of the disability system. Further, neither DOD nor the services are systematically evaluating the consistency and timeliness of decision making in the system. Military officials recognize that service members' cases often are not decided within timeliness goals and have suggested that the goals may not be appropriate in many cases. In addition, it may take longer for reservist cases to go through the system. If a goal does not reflect appropriate processing times, it may not be useful as a program management tool. Furthermore, both consistency and timeliness of decisions depend on the adequate training and experience of all participants in the disability system. Yet we found that DOD had little assurance that staff at all levels are properly trained.
To ensure that all service members, both active duty and reserve, receive consistent and timely treatment within the disability evaluation process, we recommend that the Secretary of Defense take the following five actions:

- require the Army, Navy, and Air Force to take action to ensure that the data needed to assess the consistency and timeliness of military disability rating and benefit decisions are reliable;
- require these services to track and regularly report these data, including comparisons of processing times, ratings, and benefit decisions for reservists and active duty members, to the Under Secretary of Defense for Personnel and Readiness and the Surgeons General;
- determine, based on these reports, if ratings and benefit decisions are consistent and timely across the services and between reservists and active duty members, and institute improvements to address any deficiencies that might be found;
- evaluate the appropriateness of current timeliness goals for the disability process and make any necessary changes; and
- assess the adequacy of training for MEB and PEB disability evaluation staff.

We provided a draft of this report to the Department of Defense for its review. DOD agreed with our recommendations, indicating that the Department will implement all of them and listing a number of steps it will take to do so. DOD also provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Defense, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. The report is also available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix VII.

The objectives of our report were to determine (1) how current DOD policies and guidance for disability determinations compare for the Army, Navy, and Air Force, and what policies are specific to reserve component members of the military; (2) what oversight and quality control mechanisms are in place at DOD and these three services of the military to ensure consistent and timely disability rating and benefit decisions for active and reserve component members; and (3) how disability rating and benefit decisions and processing times compare for active and reserve component members of the Army, the largest branch of the service, and what factors might explain any differences. To address objectives 1 and 2, we reviewed relevant legislation, policy guidance, and literature; interviewed officials from DOD, the Army, the Navy, the Air Force, the Reserves, and the National Guard; and visited, and interviewed relevant officials at, Lackland and Randolph Air Force Bases, Fort Sam Houston, Walter Reed Army Medical Center, the Washington Navy Yard, and Bethesda Naval Hospital. In addition, we interviewed officials from military treatment facilities. To determine if outcomes for active duty and reserve service members' disability cases were statistically consistent, we analyzed data provided by the physical evaluation board (PEB) of the Army. We also obtained summary information on total caseloads and processing times from the services and from the Department of Defense.
Based on our assessment of the quality of the Army's data, we concluded that data on disability determinations and ratings made by the Army's PEB were sufficiently reliable for our analysis. On the other hand, the Army's data on processing times were not reliable for our analysis. We did not test the reliability of statistical data provided by DOD and the services. This appendix is organized into two sections: Section 1 describes the analyses related to our tests of data quality and reliability. Section 2 describes the empirical analyses that were used to determine if outcomes for active duty and reserve disability cases were statistically consistent.

To ensure that the Army data were sufficiently reliable for our analyses, we conducted detailed data reliability assessments of the data sets that we used. We restricted these assessments, however, to the specific variables that were pertinent to our analyses. We found that all of the data sets used in this report were sufficiently reliable for use in our analyses. To allow us to analyze the outcomes of the disability evaluation process and determine whether decisions were made in a timely fashion, we requested that the Army share data from both the Medical Evaluation Board and the Physical Evaluation Board for our review. The Army provided extracts from both the Medical Evaluation Board Internal Tracking Tool (MEBITT), used by the Medical Evaluation Board, and the Physical Disability Computer Assisted Processing System (PDCAPS), used by the PEB. During interviews with the database managers responsible for MEBITT and PDCAPS, we learned that the Army has few internal controls to ensure that the data are complete and accurate. Consequently, we conducted a trace-to-file process to determine whether the data in the electronic systems were an accurate reflection of what was recorded in the paper files. We requested that the Army provide us with the paper files for a sample of 130 cases that completed the Army's disability evaluation process between 2000 and August 2005. Army officials provided 93 paper files for our review; the remaining files were archived or were not found. We checked the data in the files provided against the electronic records in MEBITT and PDCAPS. We determined that the MEBITT data were not sufficiently reliable for our use. We also determined that in PDCAPS, there was a high degree of accuracy in the data fields related to rank, component (active duty versus reserve component), date of entry into military service, primary military occupational specialty, disposition of disability case, percentage rating for disability, location of PEB, and illness/diagnosis codes. These fields were deemed reliable for use in our report. However, this review also revealed that the data in the date fields, such as the narrative summary dates and the final decision dates, were often inaccurate and were therefore determined to be of insufficient quality for use in our report.

To determine if outcomes for active duty and reserve disability cases were statistically consistent, we conducted extensive statistical analyses, including cross tabulations and econometric modeling. This was important because active and reserve component soldiers being evaluated differ greatly in demographic characteristics and in administrative characteristics, such as pay grade and occupational specialty.
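As a simple illustration of the kind of cross tabulation referred to here, the sketch below uses the pandas library on a few hypothetical records with PDCAPS-style fields; the column names and category labels are assumptions for illustration, not the Army's actual file layout.

    import pandas as pd

    # Hypothetical records with PDCAPS-style fields; names and values are assumptions.
    cases = pd.DataFrame({
        "component":   ["active", "reserve", "reserve", "active", "reserve", "active"],
        "disposition": ["severance", "without_benefits", "severance",
                        "permanent_retirement", "without_benefits", "severance"],
        "pay_grade":   ["E4", "E6", "E5", "E4", "E7", "E3"],
    })

    # Two-way table of disposition by component, as counts and as row percentages.
    print(pd.crosstab(cases["component"], cases["disposition"]))
    print(pd.crosstab(cases["component"], cases["disposition"], normalize="index").round(2))

    # Adding a third variable (here, pay grade) gives the kind of three-way
    # cross-classification described in the next paragraph.
    print(pd.crosstab([cases["component"], cases["pay_grade"]], cases["disposition"]))

Such tables describe raw differences only; the regression models discussed next adjust for multiple factors simultaneously.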
Recognizing the potential of these characteristics to influence final outcomes and disability ratings, we developed econometric models to assess whether the observed differences between active and reserve component soldiers persist after controlling for these factors. We began with a series of bivariate cross tabulations and then expanded these cross-classifications and examined three-way and four-way tables. These allowed us to compare large groups of active and reserve soldiers, as well as to compare soldiers in specific sets of categories, such as active and reserve soldiers of different grades being evaluated at different PEBs. To control for additional factors, we supplemented the cross tabulations with ordinary least squares (OLS) and multivariate logistic regressions. Our analyses considered both the size and significance of the relationships of interest, using means, percentages, and odds and odds ratios to assess magnitude, and F-tests, chi-square tests, and Wald statistics to assess the significance of the differences.

The analyses are limited due to our inability to control for several important factors in the disability evaluation process. For example, no reliable electronic data existed to indicate whether an injury existed prior to service or was incurred outside the line of duty, both of which are primary reasons for separating a soldier without benefits. Similarly, Army officials told us that the data on years of service for reservists in the electronic files the Army provided were unreliable. Additionally, soldiers declared fit or separated without benefits do not receive percentage disability ratings, and the Army reports no impairment codes for soldiers declared fit. As such, we could not determine whether active and reserve component soldiers were similarly likely to be declared fit controlling for impairment or percentage rating. Given these difficulties, we restricted our multivariate analyses to soldiers rated unfit.

To assess factors contributing to the final rating among those members declared unfit and receiving a percentage rating (that is, excluding those separated without benefits), we ran a series of multivariate models. Army data systems report up to four impairments per soldier. A soldier's final percentage disability rating is determined by a composite of the ratings for individual impairments, the system(s) affected, and how each impairment relates to the soldier's ability to perform his or her duties. Regression analysis allows us to assess whether the observed differences between reserve and active soldiers' final ratings persist after controlling for factors that enter the decision process, such as military occupational specialty and system of impairment, as well as other factors, such as demographic differences between reserve component and active duty soldiers. We began by estimating a "gross effects" (or unadjusted) model, which considers the gross difference in mean disability ratings between active and reserve component soldiers, ignoring other factors. The model confirms descriptive statistics showing that reserve component members' ratings average approximately 4 points higher than those of active component members. We next estimated a series of alternative "net effects" (or adjusted) models to account for other factors that influence the decision process; these models estimate the impact of being a reservist on ratings "net" of other factors.
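A minimal sketch of how gross- and net-effects models of this kind might be specified is shown below, using the statsmodels formula interface. The file name and variable names are hypothetical; the specific control variables actually used in our models are described in the next paragraph.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file of soldiers rated unfit; column names are assumptions.
    df = pd.read_csv("peb_unfit_cases.csv")

    # "Gross effects" (unadjusted): mean rating difference by component alone.
    gross = smf.ols("rating ~ C(reserve)", data=df).fit()

    # "Net effects" (adjusted): adds decision-process factors and demographics.
    net = smf.ols(
        "rating ~ C(reserve) + num_impairments + C(body_system) + C(occ_specialty)"
        " + C(decision_year) + age + C(race) + C(sex) + C(pay_grade) + C(peb_location)",
        data=df,
    ).fit()

    # Compare the coefficient on the reserve indicator before and after adjustment.
    print(gross.params.filter(like="reserve"))
    print(net.params.filter(like="reserve"))

The coefficient on the reserve indicator in the adjusted model corresponds to the kind of estimate reported in table 4.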
Our first model included the number of reported impairments, the physical system affected, and occupational specialty; a second model added year of decision, age, race, sex, pay grade, and PEB to control for forces that may influence the decision process unofficially, as well as for certain demographic differences between the components. Additionally, we ran a variety of alternative specifications to ensure the stability and robustness of the results; these included, for example, a model testing the interaction between system affected and occupational specialty and a model accounting for the clustering (and potential "nonindependence") of cases within each PEB. Table 4 presents the coefficient representing the relationship between reserve status and final disability ratings in models that control for a limited set of factors both relevant and external to the formal decision process. What appears to be a small difference in ratings between reserve and active component members diminishes after controlling for other factors. Overall, the results of our OLS regression analyses suggest that active and reserve component members receive similar disability ratings after controlling for factors that enter the decision process both formally and indirectly.

To assess receipt of benefits, we estimated a multinomial logistic model, a technique that allows us to estimate the likelihood of placement in one of several categories while controlling for additional factors. The model produces relative risk ratios that compare the relative odds of reserve component soldiers and active duty soldiers determined unfit for duty being placed into either one of two categories (severance pay or permanent disability retirement) rather than the base, or referent, category (separated without benefits). With controls, the relative risk ratio compares the odds of placement in the given category for similarly situated active and reserve component soldiers. A relative risk ratio of 1 indicates that reserve and active component members have equal odds of being placed in one category rather than the base category. A relative risk ratio of less than 1 for reserve soldiers indicates that reservists have lower odds than active members of placement in the category rather than in the base category, and a relative risk ratio of greater than 1 indicates that reservists have higher odds than active duty members of being placed in that category rather than in the base category. Because soldiers placed on the temporary disability retired list (TDRL) have not received a final benefits determination, they are excluded from the model.

The relative risk ratios in table 5 demonstrate that, among those declared unfit, reserve component soldiers have significantly lower odds than active component soldiers of receiving either permanent disability retirement or lump sum disability severance pay. Prior to controlling for other factors (our "gross effects" model), reserve soldiers have significantly lower odds than active component members of receiving either permanent disability retirement or severance pay rather than being separated without benefits; the relative risk ratios of 0.5 and 0.4 in the first row of the table indicate that reservists are only half as likely to receive permanent disability retirement and less than half as likely to receive severance pay, respectively.
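The benefit-outcome analysis can be sketched in a similar, purely illustrative way with a multinomial logistic model; exponentiating the estimated coefficients yields the relative risk ratios discussed here. Again, the file and variable names are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file of soldiers found unfit, excluding TDRL cases; names are assumptions.
    df = pd.read_csv("peb_unfit_dispositions.csv")

    # outcome: 0 = separated without benefits (base category),
    #          1 = lump sum severance pay, 2 = permanent disability retirement.
    model = smf.mnlogit(
        "outcome ~ C(reserve) + num_impairments + C(body_system) + C(occ_specialty)"
        " + age + C(race) + C(sex) + C(pay_grade) + C(peb_location)",
        data=df,
    ).fit()

    # Exponentiated coefficients are relative risk ratios: a value below 1 on the
    # reserve indicator means reservists have lower odds than active duty soldiers
    # of that outcome relative to separation without benefits.
    rrr = np.exp(model.params)
    print(rrr.filter(like="reserve", axis=0))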
This relationship persists after controlling for a limited set of factors both relevant and external to the official decision making process (our "net effects" models); in fact, the estimated difference between reservists and active duty soldiers increases with the inclusion of variables such as race, sex, and PEB location. While these additional factors do not directly enter the decision making process, they control for some of the administrative and demographic differences we observe between active and reserve component members. The size of the difference varies by benefit type: for severance pay, reserve soldiers have less than one-third the odds of active duty soldiers, while for permanent disability retirement, the odds of reservists receiving this type of benefit rather than separation without benefits are about one-tenth those of active component members. We lacked reliable electronic data on two potentially important factors: length of service and injuries existing prior to service. Our inability to control for these factors prevents us from determining whether the differences presented above are warranted or defensible.

Federal Code
- 38 CFR Part 4: Veterans Affairs Schedule for Rating Disabilities

Department of Defense
- DODD 1332.18, "Separation or Retirement for Physical Disability"
- DODI 1332.38, "Physical Disability Evaluation"
- DODI 1332.39, "Application of the Veterans Administration Schedule for Rating Disabilities"

Army
- AR 40-400, "Patient Administration"
- AR 40-501, "Standards of Medical Fitness"
- AR 600-8-4, "Line of Duty Policy, Procedures, and Investigations"
- AR 600-60, "Physical Performance Evaluation System"
- AR 635-40, "Physical Evaluation for Retention, Retirement, or Separation"

Navy
- SECNAV 1850.4E, "Department of the Navy Disability Evaluation Manual"
- JAGINST 5800.7D, "Manual of the Judge Advocate General"
- NAVMED P-117, "Manual of the Medical Department"

Air Force
- AFI 36-2910, "Line of Duty (Misconduct) Determinations"
- AFI 36-3212, "Physical Evaluation for Retention, Retirement and Separation"
- AFI 41-210, "Patient Administration Functions"
- AFI 44-157, "Medical Evaluation Boards and Continued Military Service"
- AFI 48-123, "Medical Examination and Standards"

This appendix compares Army disability evaluation outcomes for active duty and reserve component service members as of August 2005. For the purpose of our analysis, we counted only final dispositions for service members initially placed on the temporary disability retired list (TDRL) and subsequently taken off that list when a final disposition was made by the Army's Physical Evaluation Board (PEB). In these cases, we counted the final disposition in the year the initial TDRL decision was made. As a result, the tables in this appendix show fewer TDRL dispositions than the number issued by the PEB annually, according to the Army. The tables also show greater numbers of permanent disability retirement and other dispositions than the numbers reported by the Army PEB annually for the years 2001 through 2004. In each case, the differences are more pronounced in earlier years. Therefore, data in these tables do not represent the number of each type of disability disposition issued by the Army PEB annually.

In addition to the contact named above, Clarita A. Mrena, Assistant Director, and Anna M. Kelley made major contributions to this report.
In addition, Jason Barnosky, Melinda Cordero, Erin Godtland, and Scott Heacock served as team members; Lynn Milan, Anna Maria Ortiz, Doug Sloane, Mitch Karpman, and Wil Holloway provided guidance and assistance with design and analysis; Rachel Valliere advised on report preparation; and Roger Thomas provided legal advice.

Veterans' Disability Benefits: Claims Processing Challenges and Opportunities for Improvements. GAO-06-283T. Washington, D.C.: December 7, 2005.
Military Personnel: Top Management Attention Is Needed to Address Long-standing Problems with Determining Medical and Physical Fitness of the Reserve Force. GAO-06-105. Washington, D.C.: October 27, 2005.
Military Personnel: DOD Needs to Improve the Transparency and Reassess the Reasonableness, Appropriateness, Affordability, and Sustainability of Its Military Compensation System. GAO-05-798. Washington, D.C.: July 19, 2005.
Federal Disability Assistance: Wide Array of Programs Needs to be Examined in Light of 21st Century Challenges. GAO-05-626. Washington, D.C.: June 2, 2005.
Veterans' Disability Benefits: Claims Processing Problems Persist and Major Performance Improvements May Be Difficult. GAO-05-749T. Washington, D.C.: May 26, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: March 4, 2005.
Military Pay: Gaps in Pay and Benefits Create Financial Hardships for Injured Army National Guard and Reserve Soldiers. GAO-05-125 and related testimony GAO-05-322T. Washington, D.C.: February 17, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 1, 2005.
SSA's Disability Programs: Improvements Could Increase the Usefulness of Electronic Data for Program Oversight. GAO-05-100R. Washington, D.C.: December 10, 2004.
Veterans Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004, and related testimony GAO-06-120T.
Military Personnel: DOD Needs to Address Long-term Reserve Force Availability and Related Mobilization and Demobilization Issues. GAO-04-1031. Washington, D.C.: September 15, 2004.
Social Security Administration: More Effort Needed to Assess Consistency of Disability Decisions. GAO-04-656. Washington, D.C.: July 2, 2004.
SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearing Level. GAO-04-14. Washington, D.C.: November 12, 2003.
Defense Health Care: Army Needs to Assess the Health Status of All Early-Deploying Reservists. GAO-03-437. Washington, D.C.: April 15, 2003, and related testimony GAO-03-997T, July 9, 2003.
Veterans' Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002, and related testimony GAO-05-655T, May 5, 2005.
Defense Health Care: Disability Programs Need Improvement and Face Challenges. GAO-02-73. Washington, D.C.: October 12, 2001.
DOD Disability: Overview of Compensation Program for Service Members Unfit for Duty. GAO-01-622. Washington, D.C.: April 27, 2001.
Disability Benefits: Selected Data on Military and VA Recipients. HRD-92-106. Washington, D.C.: August 13, 1992.
The House Committee on Armed Services report that accompanies the National Defense Authorization Act for Fiscal Year 2006 directs GAO to review the results of the military disability evaluation system. In response to this mandate, GAO determined (1) how current DOD policies and guidance for disability determinations compare for the Army, Navy, and Air Force, and what policies are specific to reserve component members of the military; (2) what oversight and quality control mechanisms are in place at DOD and these three services of the military to ensure consistent and timely disability decisions for active and reserve component members; and (3) how disability decisions, ratings, and processing times compare for active and reserve component members of the Army, the largest branch of the service, and what factors might explain any differences.

Policies and guidance for military disability determinations differ somewhat among the Army, Navy, and Air Force. DOD has explicitly given the services the responsibility to set up their own processes for certain aspects of the disability evaluation system and has given them latitude in how they go about this. As a result, each service implements its system somewhat differently. Further, the laws that govern military disability and the policies that DOD and the services have developed to implement these laws have led reservists to have different experiences in the disability system compared with active duty members. For example, because reservists are not on active duty at all times, it takes longer for them to accrue the 20 years of service that may be needed to earn monthly disability retirement benefits.

While DOD has issued policies and guidance to promote consistent and timely disability decisions for active duty and reserve disability cases, DOD is not monitoring compliance. To encourage consistent decision making, DOD requires all services to use multiple reviewers to evaluate disability cases. Furthermore, federal law requires that reviewers use a standardized disability rating system to classify the severity of the medical impairment. In addition, DOD periodically convenes the Disability Advisory Council, composed of DOD and service officials, to review and update disability policy and to discuss current issues. However, neither DOD nor the services systematically determine the consistency of disability decision making. DOD has issued timeliness goals for processing disability cases but is not collecting information to determine compliance. Finally, the consistency and timeliness of decisions depend, in part, on the training that disability staff receive. However, DOD is not exercising oversight over training for staff in the disability system.

While GAO's review of the military disability evaluation system's policies and oversight covered the three services, GAO examined Army data on disability ratings and benefit decisions from calendar years 2001 through 2005. After controlling for many of the differences between reserve and active duty soldiers, GAO found that, among soldiers who received disability ratings, the ratings of reservists were comparable to those of active duty soldiers with similar conditions. GAO's analyses of the military disability benefit decisions for the soldiers who were determined to be unfit for duty were less definitive, but they suggest that Army reservists were less likely to receive permanent disability retirement or lump sum disability severance pay than their active duty counterparts.
However, data on possible reasons for this difference, such as whether the condition existed prior to service, were not available for GAO's analysis. GAO did not compare processing times for Army reserve and active duty cases because it found that the Army data needed to calculate processing times were unreliable. However, Army statistics based on these data indicate that, from fiscal years 2001 through 2005, reservists' cases took longer to process than active duty cases.
According to Commerce and DHS, the United States saw an increase of 19 million international travelers annually between 2011 and 2015, and additional spending by these travelers during this period supported 280,000 new American jobs. In fiscal year 2015, CBP officers processed more than 382 million travelers at air, land, and sea POEs, an increase of 2 percent from 2014. According to CBP, of the 382 million travelers who arrived in fiscal year 2015, more than 112 million international travelers arrived at U.S. airports, an increase of over 5 percent from 2014. In addition, according to CBP, international air travel experienced an estimated 28 percent growth from 2009 to 2015.

According to reports from the Executive Office of the President and industry stakeholders, wait times for travelers processed by CBP and their overall travel experiences can have an impact on U.S. airports and airlines in domestic and international markets. Reducing wait times can help prevent missed flight connections for travelers, lower airline costs, and attract business to airports. Travelers can immediately share their experiences with the public through social media platforms, such as Twitter and Facebook, and can share compliments or complaints about long wait times or negative interactions with CBP officers. This could affect their own and other travelers' plans to travel to the United States, making the perception of CBP's operations important to the travel industry.

Travelers undergo a multi-step inspection process upon arrival at U.S. international airports. After a plane from a foreign airport arrives at a U.S. airport terminal, the plane blocks, or parks, at a terminal gate, and travelers exit the plane into a sterile corridor that may include other gates for international arrivals but is generally separate from areas used by travelers arriving on domestic flights. At the end of the sterile corridor, travelers enter the Federal Inspection Service (FIS) area, which is a secure area of the airport where CBP inspects travelers applying for admission to the United States. Once in the FIS area, travelers are generally directed to queue for inspection by CBP through signage and by airport or airline officials or CBP officers who work in the FIS area. The manner in which travelers proceed through the FIS area varies by airport, but generally travelers are queued by immigration or citizenship status type, such as U.S. citizens, Lawful Permanent Residents, Canadian citizens, and B1/B2 visa holders.

CBP's international arrivals process incorporates automated technology to help expedite travelers at passport control. Travelers must clear passport control, also referred to as primary inspection, where CBP officers inspect their travel documents and travelers are to declare any items required by law before they can be admitted into the United States. Travelers whose admissibility cannot be initially determined are referred for a more intensive, or secondary, inspection. After passport control, travelers enter the baggage claim area to retrieve their checked luggage. Once travelers retrieve their luggage, they must pass a final exit control checkpoint. At any point during the process, a CBP officer can refer a traveler to secondary inspection. In secondary inspection, CBP officers can further inspect the traveler's travel documents and baggage. After passing exit control, travelers exit the FIS area into a non-sterile part of the airport terminal or to ground or airport transportation.
Travelers can exit the airport or re-enter the sterile area of the airport through the Transportation Security Administration security checkpoint to make a connecting flight. Figure 1 shows the inspection process for travelers arriving at U.S. international airports. In addition to CBP, other government agencies and private entities are stakeholders in the international arrivals process at U.S. international airports. These stakeholders can include a unit of the local government, such as airport authorities, domestic and foreign airlines, terminal operators who manage a terminal on behalf of local governments, the Centers for Disease Control and Prevention, and law enforcement agencies, among others. For example, at John F. Kennedy International Airport (JFK) in New York, which has five international arrivals terminals, three terminals are managed by individual airlines, one terminal is managed by a terminal operator, and one terminal is operated by an association of four airlines that use the terminal. The entity that manages the airport or international arrivals terminal(s) maintains the facility and must work with CBP to meet its standards for airport design and operation laid out in CBP’s Airport Technical Design Standard, and meet all other federal regulations. While CBP maintains control over most aspects of the FIS area, it relies on the managing entity for infrastructure changes, retractable belts and stanchions used to help queue passengers for inspection, and most signage in the FIS area, among other items. In January 2006, the Department of State (State) and DHS established a joint effort to help streamline the international arrivals process and facilitate travel for legitimate travelers. In 2007, CBP launched the pilot Model Ports program at George Bush Intercontinental Airport (IAH) and Washington Dulles International Airport (IAD). Following this effort, the program was formalized under the Implementing Recommendations of the 9/11 Commission Act of 2007, which mandated that the Secretary of Homeland Security establish a model ports of entry program for the purpose of providing a more efficient and welcoming international arrivals process in order to facilitate and promote business and tourism travel to the United States, while also improving security. This act required CBP to include program elements that would enhance queue management in the FIS area leading to primary inspection, assist foreign travelers once they have been admitted, and offer instructional videos in English and other languages, as deemed appropriate, in the FIS area to explain the inspection process and feature welcome videos. In addition, a portion of CBP’s fiscal year 2008 appropriation was made available for the agency to implement the program at the 20 U.S. airports with the highest number of annual foreign visitors as of 2007 and hire 200 additional officers for the 20 busiest U.S. international airports. In 2008, CBP expanded the Model Ports program to an additional 18 airports. According to OFO officials, OFO designed the program to welcome travelers to the United States and streamline the international arrivals process by improving training, signage, and using technology to facilitate entry. OFO collaborated with other DHS components, interagency government partners, and private and public stakeholders to develop and implement solutions that would facilitate travelers. 
The goals of the Model Ports program were to (1) ensure that passengers entering the United States were welcomed by CBP officers who treat them with respect and understanding; (2) provide the right information to help travelers, at the right time and in a hospitable manner; (3) create a calm, pleasant waiting area; and (4) streamline the customs process. During the Model Ports program, CBP sought to provide international travelers with more helpful information on what to expect, how to request help, and where to submit their comments or concerns. Among other things, the Model Ports program implemented a customer service professionalism program; improved wait time monitoring and reporting; improved diplomatic arrival processes and dedicated diplomatic processing lanes; formalized CBP's regular coordination with stakeholders to discuss shared responsibilities; set goals and monitored progress; implemented audio and video technology in the queuing area of passport control; and developed new signage. CBP worked to enhance its queue management techniques and began to implement other traveler facilitation programs and technologies, which are discussed later in this report. In its final report to Congress on the Model Ports program in 2010, CBP highlighted program accomplishments, including employee training, recognition of exemplary employee performance, dissemination of entry requirements to international travelers via CBP's website, and development of the Airport Wait Time Console to allow CBP management to review and analyze data on arriving international flights and wait times, among other accomplishments. While the program ended in 2010, OFO continued to implement the elements of the program as standard practices across all U.S. international airports.

In 2012, the President announced the National Travel and Tourism Strategy for expanding travel to and within the United States. This strategy established a goal of attracting 100 million international visitors to the United States annually by 2021 to generate an estimated $250 billion annually. The strategy included instructions for the federal agencies it identified as taking part in the travel and tourism industry, including instructions for monitoring and evaluating results by, among other things, developing key performance metrics and accountability measures to evaluate progress on goals and identifying issues needing corrective action. In May 2014, the President issued a Presidential Memorandum directing the Secretaries of Commerce and Homeland Security to establish a national goal and develop airport-specific action plans to enhance the arrivals process for international travelers to the United States. In February 2015, Commerce and DHS released a report to the President that defined a national goal to "provide a best-in-class international arrivals experience, as compared to global competitors, to an ever-increasing number of international visitors while maintaining the highest standards of national security." Commerce and DHS developed this goal through consultation with leaders from the airline industry, airport authorities, state and local governments, and other customer service industry leaders. CBP and Commerce worked to establish the metrics and processes necessary to support the ongoing improvement directed in the President's strategy. For example, CBP worked with airports, airlines, and industry associations to develop airport-specific action plans for the 17 busiest U.S. 
international airports that included steps to drive innovation and increase security while streamlining the entry process. As shown in figure 2, these 17 airports include all Model Ports program airports except McCarran International Airport (LAS) in Las Vegas; Orlando Sanford International Airport (SFB) in Sanford, Florida; and San Juan-Luis Munoz Marin International Airport (SJU) in Puerto Rico; and accounted for over 73 percent of all international travelers to the United States in 2014. CBP updates the action plans and reports on performance metrics quarterly and makes these updates available on its public website. For these airports, CBP publishes metrics, such as average monthly travel volume and wait times, through terminal-level informational "dashboards." In addition, Commerce and DHS established a new interagency task force, co-chaired by the Deputy Secretaries of Homeland Security and Commerce, to engage with industry stakeholders to identify the key factors that drive a traveler's perception of the international arrivals experience and decision to travel to the United States, among other things.

CBP and airport and airline stakeholders jointly implement a number of travel and tourism facilitation initiatives at U.S. international airports. In general, to implement these initiatives, CBP develops the requirements or standards for initiatives, approves the implementation, determines which travelers are eligible to use them, and transmits traveler data to its systems that it uses to conduct inspections. Figure 3 describes the CBP airport travel and tourism facilitation initiatives being implemented by CBP and stakeholders at U.S. international airports as of the end of fiscal year 2016, including initiatives begun under the Model Ports program. For images of select initiatives, see appendix II. The initiatives are as follows:

Automated Passport Control (APC) kiosks: Program that allows eligible travelers to use a self-service kiosk to scan their passport, take a photograph, and answer a series of questions to verify biographic and flight information during the CBP inspections process. The kiosks issue a receipt to travelers, who bring their receipts and their passports to a CBP officer to finalize their inspection.

Mobile Passport Control (MPC): Program in which travelers can use an application on their mobile device to populate and submit their passport information, customs questions, and upload a self-photo prior to entering the FIS area. Travelers scan their mobile device with a CBP officer to complete the inspections process at passport control.

Baggage first: New process at Federal Inspection Service (FIS) areas in new terminals that allows travelers to claim their checked baggage before completing passport control, eliminating the exit control point. None of the 17 busiest U.S. international airports have implemented baggage first yet.

Modified egress: Pilot program that modifies the CBP exit control checkpoint in the FIS area. After being inspected at passport control and retrieving their baggage, travelers can leave the FIS area unless stopped by a CBP officer monitoring the baggage claim area.

Diplomatic arrival processes and diplomatic processing lanes: Designated lanes at passport control for diplomats and foreign dignitaries to expedite the CBP inspections process. These lanes were first established during the Model Ports program.

One Stop: Process that expedites the movement of international travelers that are either en-route to a foreign destination at an airport that has the International to International baggage program or that have no checked baggage to claim. These travelers use an expedited lane at passport control and a separate exit out of the FIS, allowing them to bypass baggage claim and the exit control point.

Electronic signage and multimedia: Television monitors that display signs and multimedia that detail what travelers can expect when they arrive in the FIS area and welcome travelers to the United States, among other messages. CBP initially installed the television monitors at the 20 Model Ports program airports in 2006 and 2007. Since then, CBP and airport stakeholders have continued to provide television monitors.

Professionalism Service Manager (PSM) program: Focuses on professionalism standards and customer service within CBP and with the public and external stakeholders at each U.S. international airport. Each U.S. international airport has at least one PSM who promotes awareness of CBP's mission and manages and responds to compliments, complaints, and other feedback at the airport.

Enhanced queueing: Process by which travelers queue in serpentine lines and are directed to the next available booth or kiosk by a queue manager, rather than individually selecting a parallel line to complete the CBP inspections process.

Reimbursable Services Program fee agreements: Subject to certain criteria, CBP is authorized to enter into reimbursable service agreements to cover costs, including overtime, associated with customs, immigration inspection-related, border security, and agricultural processing services at ports of entry.

Express Connection: Program that facilitates the processing of travelers with closely scheduled connecting flights. Participating airlines identify and direct travelers to specially designated booths at passport control to reduce the number of missed connections.

Stakeholder meetings: At least monthly meetings with all of CBP's airport stakeholders to discuss shared responsibilities, goal setting, and progress monitoring. CBP's airport stakeholders include airline station managers and airport managers. These meetings began during the Model Ports program and continue today.

Global Entry kiosks: Program that expedites the inspections process for preapproved, low-risk travelers. Travelers use self-service kiosks to scan their passports or U.S. permanent resident cards, submit their fingerprints, and complete their customs declaration.

Variable Message Signage: Electronic monitors that provide wayfinding direction and additional information to assist travelers in determining how to proceed through the FIS area.

International to International baggage program: Pilot program in which airports forward the baggage of travelers en-route to a foreign destination to the departing aircraft so that they do not claim their baggage in the FIS area.

Stakeholders, such as airport and terminal operators, choose which initiatives to implement and pay for most of the initiatives and associated infrastructure and maintenance costs. For example, CBP provides the technical and business requirements for APC kiosks, including requiring stakeholders to coordinate with CBP, specifying that they are to communicate and receive secure messages, and requiring that they meet language requirements, among others. In turn, stakeholders are responsible for any remodeling of the FIS facility, purchasing the kiosks, maintaining the kiosks (including replenishing paper), and providing the necessary infrastructure, such as Ethernet cabling and power connection. As shown in figure 4, CBP and stakeholders have rolled out the implementation of initiatives at U.S. international airports from 2006 through the present. 
Various airport-specific factors can affect whether and how airports implement travel and tourism facilitation initiatives. These factors include the size and layout of the FIS facility, the infrastructure needed to support initiatives in the FIS facility, the willingness and ability of the airport stakeholders to pay for initiatives or pay for infrastructure to support them, and stakeholder discretion in how best to implement initiatives. Some terminals do not have the appropriate infrastructure, size, or layout to support the implementation of initiatives in the FIS facility. For example, during our site visits we observed APC kiosks located inside the FIS area in some terminals and in sterile corridors at other terminals, based on space constraints. We also observed APC kiosks in different configurations, including single and multiple columns, due to the size and layout of FIS areas and sterile corridors. In addition, according to CBP officials, not all airports have the space available to create a separate exit for travelers who could utilize One Stop, and the current Airport Technical Design Standard, which was established in 2012, does not allow for easy transitions to a baggage first concept. Finally, while MPC remains in the pilot phase and CBP continues to roll it out among U.S. international airports, the initiative requires internet connectivity, meaning the traveler needs either cellular data on their phone or a wireless internet connection. Some airports have taken steps to provide free wireless internet access to enable MPC to be implemented.

As previously discussed, airport authorities, airlines, and terminal operators have the option of implementing initiatives at the airport or terminal, depending on the airport. According to stakeholders that we spoke with during our site visits, one deciding factor is the willingness and ability of the airport stakeholders to pay for initiatives or infrastructure to support them, except Global Entry, which is paid for by CBP user fees. Some of the initiatives, such as APC kiosks, can be costly because they require infrastructure changes, hardware investment and maintenance, and personnel to support them, while others, such as MPC, are less costly because a third party provides the mobile phone application and the airport or terminal operator pays for phone scanners and wireless internet access. The airport's status as a destination or a hub airport can also affect stakeholder decisions to invest in these initiatives. A destination airport is an airport where most travelers plan to stay in the region and do not have a connecting flight. A hub airport is an airport where most travelers connect to another airport in the United States or abroad to complete their trip. According to officials that we spoke with during our site visits, stakeholders generally have an incentive to pay for the initiatives at a destination airport so travelers have a welcoming experience and choose to spend time at in-airport retailers, while stakeholders generally have an incentive to pay for the initiatives at hub airports to ensure that travelers make their connecting flights. For example, airlines implement the Express Connection and International to International baggage programs at hub airports to assist travelers in making their connecting flights, which helps with traveler satisfaction and prevents the airlines from incurring rebooking costs. The implementation of some of the initiatives can also vary by terminal or airport. 
For some initiatives, implementing partners have more discretion over implementation, which allows stakeholders to apply their own design preferences. For example, Detroit Metropolitan Wayne County Airport (DTW) and Miami International Airport (MIA) are piloting the modified egress initiative differently. In Detroit, which is a one-level FIS facility, travelers exiting the FIS area are slowed by a serpentine flow and CBP officers retrieve the baggage of travelers who are referred to secondary inspection. At Miami North Terminal, which is a two-level FIS facility, travelers who are referred to secondary inspection are segregated from cleared travelers by Plexiglas barriers immediately after primary inspection so that they can proceed to retrieve their own baggage from the secure side of the Plexiglas barrier and then self-report at secondary inspection, as shown in figure 5.

Another initiative that varies across airports and terminals is the use of color-coded signage and queueing, as shown in figure 6. For example, three terminals at John F. Kennedy International Airport (JFK) use color-coded signage, but all use different color schemes to identify different traveler types and technology initiatives, and only one of these terminals also uses color-coded retractable belts to complement the color-coded signage. Similarly, the color scheme at MIA North Terminal, also known as Terminal D, differs from the color scheme at Dallas/Fort Worth International Airport (DFW). Because color-coded signage is not a CBP-led initiative, implementing partners have more flexibility to implement this initiative as they prefer. Another example of variation across airports is the different versions of APC kiosks, which vary depending on the vendor the airport chooses and the phase in which the airport implemented the kiosks, as shown in figure 7. In addition to private vendors, airport authorities such as Dallas/Fort Worth International Airport (DFW) and George Bush Intercontinental Airport (IAH) have developed their own APC kiosks to generate revenue. CBP has rolled out the APC program in four phases of eligible users, to include (1) U.S. citizens, (2) Canadian citizens, (3) U.S. Lawful Permanent Residents, and (4) B1/B2 visa holders. As a result, APC is at phase four in some airports, while in phase one, two, or three at other airports.

In addition, according to OFO officials, CBP plans to update its Airport Technical Design Standard to include, among other things, a baggage first concept for all new airport facilities built in the future. As previously discussed, this process allows travelers to claim their checked baggage before completing passport control, modifying the CBP exit control checkpoint. New facilities at two smaller airports, Austin-Bergstrom International Airport (AUS) and Houston Hobby International Airport (HOU), have incorporated this process into their designs. However, this update to the Airport Technical Design Standard would not have an impact on existing facilities, and due to infrastructure constraints and current FIS area configurations with baggage carousels located between passport control and exit control, the baggage first concept is not possible for many existing facilities. The modified egress pilot program is more flexible than the baggage first concept in that it does not require significant infrastructure modifications, such as moving the baggage carousels before passport control. 
CBP launched its modified egress pilot program for existing facilities to streamline the inspection process, which, as previously discussed, modifies the CBP exit control checkpoint in the FIS area. Five terminals at the 17 busiest U.S. international airports have piloted modified egress, and their implementation varies based on their specific infrastructure constraints. Figure 8 below shows the evolution of the CBP air traveler inspection process from the current process to modified egress to baggage first. Table 1 includes these additional stakeholder initiatives, such as color-coded queuing and signage and expected wait time monitors in the FIS, and provides information on the prevalence of airport travel and tourism facilitation initiatives at the 31 terminals in the 17 busiest U.S. international airports. As of the end of fiscal year 2016, the 31 terminals at the 17 busiest U.S. international airports had a total of 1,014 APC kiosks and 408 Global Entry kiosks to help facilitate CBP processing of travelers for primary inspection, based on CBP data. See appendix III for additional information about the implementation of initiatives at international arrivals terminals at the 17 busiest U.S. international airports.

OFO has developed two internal airport travel facilitation goals: (1) improving customer service levels for international arrivals and (2) maintaining or reducing wait times. According to CBP, it evaluates progress towards its goal of improving customer service levels for international arrivals through its traveler satisfaction surveys and stakeholder feedback on how CBP can improve the parts of the arrivals process that are under CBP's control, its dashboards for the 17 busiest U.S. international airports, online comment cards entered into CBP's Complaint/Compliment Management System, and input from stakeholders. CBP's most recent traveler satisfaction survey, conducted in 2016, suggested an association between reported wait times and traveler satisfaction; 96 percent of survey respondents felt their process time was short or reasonable. In addition, the 2016 survey report suggested an association between perceptions of officer professionalism and traveler satisfaction; 96 percent of survey respondents felt satisfied with CBP officers. Further, each airport's PSM receives and is to review comments from the Complaint/Compliment Management System and work with CBP officials at headquarters and at his or her airport to address comments and complaints. Additionally, PSMs can address traveler complaints and compliments in person on the scene of an incident that has occurred during the CBP inspection process, or by telephone or email after the traveler has left the airport.

OFO measures progress towards its goal of maintaining or reducing wait times, as we discuss later in this report, by monitoring wait times, holding monthly meetings, and conducting studies, among other things. OFO officials said that OFO has met its goal to maintain or reduce wait times, based on its analysis of wait time data, which OFO said shows that wait times decreased more than three percent in 2015 despite a five percent increase in traveler volume. In addition, officials said that OFO's analysis of wait time data shows that international arrivals increased by six percent in fiscal year 2016 but wait times were about the same as in 2015. 
CBP attributes meeting its wait time goal to the implementation of technology initiatives such as APC kiosks, which expedite passport control for eligible travelers.

According to CBP headquarters officials, the agency uses the Workload Staffing Model (WSM) to help determine staffing requirements and make allocation decisions for CBP officers at POEs, including airports. As part of its Resource Optimization Strategy, the WSM is an analytical, data-driven staffing tool designed to inform CBP officer allocation decisions regarding current and future officer staffing at POEs. CBP conducts WSM calculations annually and publishes its CBP-wide calculation for all of its POEs in its annual reports to Congress. CBP officials at headquarters conduct the calculations for each POE within a field office and provide this information to the field office annually when it allocates new officers. The port director has discretion to determine how to allocate officers among his or her ports within the POE. Headquarters officials do not direct port directors on how to manage staffing allocations to the ports. In determining staffing needs at the POEs, the WSM takes into account the frequency of all key CBP officer activities; the processing time to complete each activity; available hours per officer; port-specific factors required to ensure coverage; and future requirements related to new facilities, technologies, or service requirements. The estimated process time for each POE accounts for different risk factors among the POEs, the additional workload created when officers send a traveler to secondary inspection, and the impact of travel facilitation initiatives, such as APC and MPC, on processing time. Officials who conduct the calculations must also manually enter data to ensure coverage of exit control at airports, for which CBP does not track process time or wait time. In addition, CBP officials at headquarters add, as add-ons, the allocation of core overtime (which is discussed later in this section), projected officers needed for new facilities, and adjustments to account for growth in traveler volume and the use of business transformation initiatives such as APC. For example, when a new FIS facility is built, the field office develops an estimate of projected workload to give to CBP headquarters. These add-on calculations can increase or decrease the total number of officers needed based on the WSM calculation. For example, an airport that is opening a new terminal in the next year may need additional officers as a result of the add-on calculation, but an airport that implemented APC kiosks in the previous year may need fewer officers as a result of the add-on calculation. While these additional factors are not included in the WSM calculation, officials at headquarters have developed a methodology to provide an estimate of additional or fewer officers needed at the POE based on prior experience. This is added to the WSM calculation. Figure 9 describes these calculations.

In 2014, the DHS Office of Inspector General conducted a review of the reliability of the WSM in determining the number of CBP officers needed to fulfill CBP mission requirements. The DHS Office of Inspector General found that the WSM had a sound methodology to determine its officer staffing needs and to identify staffing shortages, but made recommendations to strengthen the internal controls over the model. CBP concurred with the recommendations and planned to complete steps to implement them by December 2016. 
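To illustrate the arithmetic behind a workload-based staffing estimate of this kind, the following is a minimal sketch showing how activity frequencies and processing times could be converted into an officer requirement and then adjusted with add-ons. It is a simplified, hypothetical illustration; the activities, hours, and add-on values are assumptions for illustration only, not CBP's actual Workload Staffing Model or data.

```python
# Illustrative sketch only: a simplified workload-based staffing estimate in the
# spirit of the WSM described above. Activity names, values, and the add-on
# figures are hypothetical, not CBP data.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    annual_frequency: int        # how often the activity occurs per year
    hours_per_occurrence: float  # processing time per occurrence, in hours

def base_officers_needed(activities, available_hours_per_officer):
    """Total workload hours divided by the productive hours one officer provides."""
    workload_hours = sum(a.annual_frequency * a.hours_per_occurrence for a in activities)
    return workload_hours / available_hours_per_officer

activities = [
    Activity("primary inspection", 2_500_000, 1 / 60),   # about 1 minute per traveler
    Activity("secondary inspection", 75_000, 20 / 60),   # about 20 minutes per referral
    Activity("exit control coverage", 8_760, 2.0),       # manually entered coverage hours
]

base = base_officers_needed(activities, available_hours_per_officer=1_600)

# Add-ons applied outside the core calculation, as described above: core overtime,
# projected officers for new facilities, and adjustments for traveler volume growth
# and initiatives such as APC (which can reduce the estimate).
add_ons = {"new terminal opening": +15, "APC kiosks deployed last year": -8}
total = base + sum(add_ons.values())

print(f"Base estimate: {base:.0f} officers; with add-ons: {total:.0f} officers")
```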
According to CBP’s Assistant Commissioner for Human Resources Management, staffing is one of the most prominent challenges facing the agency. CBP needs an additional 2,107 officers for fiscal year 2017 across all POEs, according to CBP’s Deputy Assistant Commissioner. While a portion of CBP’s fiscal year 2014 appropriation was made available for hiring at least 2,000 new CBP officers to help address staffing needs, the agency has been able to hire and onboard a net increase of 1,135 officers due to attrition and hiring challenges, according to CBP officials. According to CBP officials, these challenges include competition from other federal and state law enforcement agencies and a lengthy hiring and onboarding process that includes polygraph tests and several months of training. CBP is studying these hiring challenges and taking steps to address them. For example, according to CBP’s Assistant Commissioner for Human Resources Management, CBP has begun initiatives aimed at decreasing the amount of time it takes for an applicant to complete the hiring process, increased the number of recruiting events, and coordinated with the Department of Defense to recruit qualified veterans and individuals separating from military service. As shown in table 2, OFO supervisors at the airports use a variety of tools, overtime, and other strategies to manage staffing daily, weekly, and seasonally at the 17 busiest U.S. international airports. In addition to these tools, CBP managers at headquarters and in the field must consider several airport-specific factors that affect how they are able to manage staff at airports. For example, if an airport is located in a POE with more than one port, or more than one international terminal, local CBP operations require CBP to split its available staff and staff may spend time traveling between ports or terminals. This can affect the total number of hours an officer is available to process travelers during his or her shift, which requires managers to plan daily staffing with these periods of time in mind. In addition, to supplement staffing during peak travel hours, managers may assign officers to work overtime or reassign officers where needed. For example, Fort Lauderdale-Hollywood International Airport (FLL) shares its CBP officers with the Fort Lauderdale sea port to process cruise ship and other arriving sea traffic. Officers drive between the airport and sea port to meet the peak traveler volumes at both facilities. Another factor that can affect how OFO supervisors manage staffing at the airports is how often flights from destinations with high-risk profiles arrive at the airport. It takes officers longer to inspect travelers arriving on these flights due to the higher percentage of travelers that CBP refers to secondary inspection and takes adverse actions, such as seizures and arrests. As a result, CBP uses more resources for these flights to facilitate the flow of legitimate travelers. Airport and airline stakeholders can pay for CBP officers to work overtime during peak travel hours or outside regular operational hours at the discretion of port leadership. CBP has reimbursable service agreements under the Reimbursable Services Program at 11 airports, as discussed previously, to cover the costs of certain CBP services, including overtime. 
CBP has entered into reimbursable services agreements with stakeholders under Section 560 for services at Dallas/Fort Worth International Airport (DFW); George Bush Intercontinental Airport (IAH); and Miami International Airport (MIA). In addition, CBP has entered into reimbursable services agreements with stakeholders under Section 559 for services at Boston-Logan International Airport (BOS); Fort Lauderdale-Hollywood International Airport (FLL); Honolulu International Airport (HNL); John F. Kennedy International Airport (JFK); Los Angeles International Airport (LAX); Orlando International Airport (MCO); Philadelphia International Airport (PHL); and San Francisco International Airport (SFO). Table 3 provides a brief description of these agreements at each airport. According to CBP, from fiscal years 2014 through 2016, CBP processed nearly 2.7 million travelers at the 11 airports as a result of reimbursable service requests. Additionally, as of July 2016, of the approximately 195,000 reimbursable service hours worked for all POEs, 77 percent were worked at airports. According to OFO officials, reimbursable service agreements do not have an impact on the allocation of overtime from CBP headquarters to the POEs. Rather, they represent a commitment to provide new or enhanced services and to augment existing services.

Airport and airline stakeholders at airports also provide staffing resources associated with some of the initiatives, such as APC kiosks and MPC, and to support increasing traveler volume. These staffing resources include (1) ambassadors or assistants who direct travelers to the appropriate queue, assist travelers using APC kiosks, assist travelers with the MPC application, and help travelers make their connecting flights; (2) interpreters to help CBP officers process travelers who do not speak English; and (3) technicians who maintain APC kiosks, including replenishing paper and correcting any malfunctions. Airport and airline representatives at the airports we visited told us that they were already providing some of these staff, including airport ambassadors and interpreters, before the implementation of CBP's airport travel and tourism facilitation initiatives so that CBP officers could focus on processing travelers. These officials said that, in recent years, they have increased the number of staff they employ to accommodate the increase in traveler volume and the implementation of initiatives such as APC kiosks. According to airport and airline representatives at the airports we visited, in recent years CBP has increased its use of public-private partnerships, which has resulted in variation in the overtime services available among airports. Some of these stakeholders said they are concerned about their own ability and willingness to provide these resources in the future. CBP officials acknowledged the increase in the use of public-private partnerships in recent years and told us that this increase is a result of significant increases in traveler volume entering the United States.

According to CBP officials, maintaining or reducing wait times is an important CBP travel facilitation goal. As such, CBP monitors and manages airport wait times. On a daily basis, CBP collects data at airports that it uses to calculate wait times. 
CBP defines wait time as the time interval between the arrival of the aircraft (the block time) and the swipe of a passport by the traveler at an APC kiosk, Global Entry kiosk, MPC scanner, or by a CBP officer at a passport control booth or podium, minus the walk time to the FIS area. Walk time is an estimate of the average amount of time it takes a traveler to walk from the aircraft to the FIS entrance. The walk time is facility-dependent and varies by airport terminal. CBP electronically collects two data points for wait time calculations: the block time and the passport swipe time. CBP measures wait time for the primary inspection process only. Figure 10 shows the CBP airport wait time calculation process.

According to CBP officials and airport and airline representatives, flight arrivals and wait times can vary throughout the course of the day, by day of the week, and by season. In addition, various factors can affect wait times, including traveler volume exceeding FIS capacity; concurrent or overlapping flight arrivals; co-mingling of travelers in the FIS area from earlier flights; the number of high-risk travelers; arrivals of large numbers of visitors; technology issues such as computer network outages and slowdowns and malfunctions in equipment and facilities; unscheduled flight diversions due to inclement weather conditions; the implementation of initiatives (i.e., APC kiosks, Global Entry kiosks, and MPC); CBP officer staffing and airport and airline ambassador staffing; and whether airports provide timely interpretation and wheelchair services to travelers. For example, when traveler volume exceeds FIS capacity, CBP or airport representatives at some airports can hold travelers on the aircraft until space in the FIS becomes available or, if a waiting room is available in the sterile corridor, CBP or airport or airline representatives can queue travelers there before they proceed to the FIS area, such as at Orlando International Airport (MCO). In addition, concurrent or overlapping flight arrivals or unscheduled flight diversions due to inclement weather conditions could result in co-mingling of travelers in the FIS area from previous flights. Co-mingling of travelers refers to instances when travelers from one flight may queue in line behind travelers from an earlier or later flight, which affects the traveler's individual wait time and can affect the overall wait time for that traveler's flight. Further, the processing of large numbers of visitors may increase wait times because these travelers often cannot use technology initiatives that expedite primary inspection, such as APC and Global Entry kiosks, and take longer to inspect at CBP officer booths than other types of travelers. Moreover, wait times could increase if airport or airline representatives do not provide timely interpretation or wheelchair services to travelers when needed.

CBP has undertaken various efforts to manage, monitor, or reduce airport wait times. On a daily basis, CBP port-level supervisors are able to monitor airport wait times in near-real time using the Airport Wait Time Console, an automated system that provides current, and forecasts future, international flight and traveler arrivals data. Using the console, CBP is able to monitor the wait time at primary inspection for each individual traveler and the combined average wait time for all travelers on a flight. 
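The wait time definition above reduces to a simple calculation: passport swipe time minus block time minus the facility's walk time, with the flight-level figure taken as the average across travelers on the flight. The following is a minimal sketch of that arithmetic; the times, walk-time value, and field names are hypothetical assumptions for illustration, not CBP data or the Airport Wait Time Console's implementation.

```python
# Minimal sketch (not CBP's actual system): compute per-traveler wait times and a
# flight-level average using the definition above:
#   wait time = passport swipe time - block (arrival) time - facility walk time.
from datetime import datetime, timedelta

WALK_TIME = timedelta(minutes=7)  # hypothetical, facility-dependent walk time

block_time = datetime(2016, 8, 1, 14, 5)  # hypothetical aircraft block time

# Hypothetical passport swipe times recorded at kiosks or officer booths.
swipes = [
    ("traveler_1", datetime(2016, 8, 1, 14, 25)),
    ("traveler_2", datetime(2016, 8, 1, 14, 33)),
    ("traveler_3", datetime(2016, 8, 1, 14, 48)),
]

waits = {}
for traveler, swipe_time in swipes:
    wait = swipe_time - block_time - WALK_TIME
    waits[traveler] = max(wait, timedelta(0))  # never report a negative wait

flight_average = sum(waits.values(), timedelta(0)) / len(waits)

for traveler, wait in waits.items():
    print(f"{traveler}: {wait.total_seconds() / 60:.0f} minutes")
print(f"Flight average: {flight_average.total_seconds() / 60:.0f} minutes")
```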
This information helps CBP supervisors identify and respond to unexpected surges and overloaded queues in the FIS areas that can occur due to weather delays, among other reasons. In response to such situations, CBP supervisors may decide to open additional primary inspection booths, shift staff assignments, or use overtime to help manage wait times. CBP and stakeholders at all of the 17 busiest U.S. international airports also conduct at least monthly meetings to discuss airport operations and travel facilitation issues such as options for modernizing facilities, flight schedules, use of available staff and technology, and management of wait times. In its monthly airport travel and tourism dashboards for the 17 airports, OFO reports trends in wait times at each terminal and compares wait times among terminals, among other things. According to OFO, it publishes the dashboards, in part, to provide transparency and help facilitate discussion with airport stakeholders at monthly meetings.

OFO also monitors wait times at the headquarters level through its Planning, Program Analysis, and Evaluation Directorate to identify patterns or trends of increasing or excessive wait times. At times, OFO has sent Operational Review Teams, also referred to as "jump teams," to airports with long wait times, including Boston-Logan International Airport (BOS), Honolulu International Airport (HNL), and San Francisco International Airport (SFO), to review operations and make recommendations to help reduce wait times. For example, in 2015, OFO sent a team to review wait times, staffing, and overtime at Honolulu International Airport (HNL). The team identified contributing factors affecting wait times, including the lack of APC kiosks, which delayed processing during peak arrival periods, and made recommendations to CBP and HNL stakeholders. In February 2016, HNL implemented 32 APC kiosks. In May 2016, the CBP acting port director for the Port of Honolulu said that he had seen a significant reduction in average wait times, excessive wait times, and gate holds at HNL. According to our analysis of CBP airport wait time data, wait times decreased an average of 5 minutes for U.S. citizens and 12 minutes for visitors in the first 3 months after the implementation of APC kiosks at HNL. Similarly, after Operational Review Teams visited BOS and SFO, wait times decreased despite an increase in traveler volume, according to our analysis of CBP airport wait time data.

In response to the National Travel and Tourism Strategy, OFO also contracted with a private company to conduct Time and Motion studies and full operational analyses at the 17 busiest U.S. international airports in 2014 and 2015. The studies encompassed all elements involved in the inspection of travelers (processes, infrastructure, technology, signage, etc.) from the time travelers disembark the aircraft until they exit the FIS. In these studies, the private company provided recommendations to each airport for how CBP, the airport, and the airlines could improve processes and reduce wait times. According to OFO officials, CBP and stakeholders generally reviewed and implemented the recommendations at airports. For example, the study of the Miami International Airport (MIA) North Terminal in September 2014 identified operational issues, including congestion in the FIS and egress areas. To reduce congestion, MIA repositioned APC kiosks from the FIS area to the sterile corridor and CBP implemented modified egress. 
According to CBP officials, the agency is also continuing to develop its new Border Facilities Analytic Modeling and Simulation tool to help airport stakeholders design and implement initiatives for new and existing airport facilities. The tool allows OFO to run model scenarios to conduct "what-if" simulations, assess potential initiatives for impacts to operations, and evaluate benefits of policy, process, and facility changes post-implementation, among other purposes. For the air entry environment, users can enter various inputs on traveler type, volume, and the flow process and obtain and visualize customizable outputs, such as flight processing times and traveler wait times. As of October 2016, OFO had used the tool to help inform the design of the new baggage first terminals at Fort Lauderdale-Hollywood International Airport (FLL) and Seattle-Tacoma International Airport (SEA). According to CBP officials, in the future CBP may use the tool to help determine the initiatives that would need to be implemented at airports to maintain or reduce wait times.

CBP reports its airport wait time data on its public website to help travelers plan flights, including scheduling connecting flights, but these data have limited usefulness to travelers. Currently, CBP does not report wait times by traveler type, such as U.S. citizen or foreign visitor. Rather, CBP reports average hourly wait times for all travelers on arriving international flights to clear passport control. By reporting airport wait times for all categories of travelers combined, CBP is reporting wait times that are lower than those generally experienced by visitors. As shown in figure 11, according to our analysis of CBP wait time data for the 17 busiest U.S. international airports from May 2013 through August 2016, the average wait time was 13 minutes for U.S. citizens and 28 minutes for visitors, while the reported combined average wait time was 21 minutes. As shown in figure 12, the average wait time for visitors was higher than the average wait time for U.S. citizens at all 17 airports. For example, at John F. Kennedy International Airport (JFK) Terminal 1 from May 2013 through August 2016, the average wait time was 16 minutes for U.S. citizens and 38 minutes for visitors. Wait times are generally higher for visitors than U.S. citizens because CBP officer inspection at passport control can take longer for visitors than for U.S. citizens and visitors may not be able to use the automated technologies that expedite the inspection process. Our analysis of CBP wait time data for the 17 busiest U.S. international airports from May 2013 through August 2016 shows similar differences in wait times between U.S. citizens and visitors during both peak and nonpeak travel seasons. As shown in figure 13, the average wait time during the peak summer travel season (June through August of each year) was 14 minutes for U.S. citizens, while the average wait time was 29 minutes for visitors. Similarly, over the same period, the average wait time during the nonpeak travel season (September through May) was 12 minutes for U.S. citizens and 27 minutes for visitors. As shown in figure 14, the average wait time for visitors during the peak summer travel seasons was higher than the average wait time for U.S. citizens at all 17 airports. For example, at Orlando Airside 4 from May 2013 through August 2016, the average wait time during the summer peak travel season for U.S. 
citizens was 10 minutes and for visitors was 34 minutes, a difference of 24 minutes. See appendix IV for our detailed analysis of CBP airport wait time data.

Standards for Internal Control in the Federal Government states that management should use quality information and externally communicate the necessary quality information to achieve the entity's objectives. In addition, OFO's internal airport travel facilitation goals are improving customer service levels for international arrivals and maintaining or reducing wait times. CBP's public reporting mechanism is not currently set up to report wait times by traveler type. However, CBP monitors and reports wait times by traveler type for internal management purposes. CBP officials acknowledged the benefits to travelers of reporting wait time data by traveler type and said that it would be feasible to program the reporting mechanism to do so. Reporting wait times by traveler type could improve the usefulness of CBP's wait time data to travelers by providing them with more complete and accurate data on their wait times to help inform their flight plans, including scheduling connecting flights. In addition, it could provide additional transparency to allow CBP to work with stakeholders to determine how to improve the traveler experience and manage wait times.

In February 2016, CBP was required to begin publishing on its public website live wait times, in real time, for travelers entering the United States at the 20 busiest U.S. international airports. CBP faces technology challenges in meeting these reporting requirements, but is taking steps to be able to collect the data needed to do so. According to CBP officials, to meet these new requirements, CBP will need to collect live wait time data for the entire international arrivals process and report wait times in real time to its public website. Figure 15 highlights some of the data collection challenges CBP faces in meeting these requirements. As the figure shows, while CBP currently collects data needed to calculate wait times for primary inspection, CBP does not collect data for the remaining parts of the international arrivals process, including baggage delivery, a process controlled by the airlines, not CBP. Specifically, CBP does not collect wait time data for travelers at the point where they enter the FIS area or after their passports are swiped at an APC or Global Entry kiosk or by an officer at passport control, including the time spent retrieving baggage, queuing for CBP exit control, or exiting the FIS. According to CBP officials, the agency currently does not have an automated system or technical means to generate time stamps electronically at these points in the arrival process. CBP also faces challenges in reporting wait time data in real time to its public website because of the time required to vet the data for accuracy. Currently, CBP takes about 2 business days to publish airport wait time data because it must electronically test and manually review the data to ensure accuracy. Steps taken by CBP include removing data for refugees and asylum seekers and for the three percent of travelers with the longest wait times on each flight. CBP also manually corrects or excludes anomalies that can be caused by inaccurate block times, cancelled flights, and travelers who do not make their way to the FIS area immediately after deplaning, among other reasons. According to CBP, travelers who do not make their way to the FIS area immediately after deplaning may go to the restroom, wait for wheelchair services, or do other things that delay their arrival to the FIS area. These are important factors to consider in looking for ways to improve the usefulness of reported airport wait time data. 
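To make these vetting steps concrete, the sketch below shows one way wait time records could be filtered (dropping excluded traveler categories and roughly the three percent of travelers with the longest waits on each flight) and then averaged by traveler type, the kind of breakout this report recommends. The records, category labels, and trimming rule are illustrative assumptions, not CBP's actual algorithm or data.

```python
# Illustrative sketch only (not CBP's actual vetting algorithm): filter out
# excluded categories and roughly the 3 percent of travelers with the longest
# waits on each flight, then average the remaining wait times by traveler type.
from collections import defaultdict
from statistics import mean

# Hypothetical records: (flight, traveler_type, wait_minutes)
records = [
    ("AB123", "U.S. citizen", 12), ("AB123", "visitor", 31),
    ("AB123", "visitor", 95),       # unusually long wait
    ("AB123", "asylum seeker", 140),
    ("CD456", "U.S. citizen", 9),  ("CD456", "visitor", 26),
]

EXCLUDED_TYPES = {"refugee", "asylum seeker"}

# Group by flight and drop excluded categories.
by_flight = defaultdict(list)
for flight, traveler_type, wait in records:
    if traveler_type not in EXCLUDED_TYPES:
        by_flight[flight].append((traveler_type, wait))

# Trim the longest waits on each flight; keep the shortest ~97 percent (at least
# one traveler). With these tiny illustrative samples the trim is coarser than it
# would be on real flight-sized data.
kept = []
for flight, travelers in by_flight.items():
    travelers.sort(key=lambda t: t[1])            # shortest to longest wait
    cutoff = max(1, int(len(travelers) * 0.97))
    kept.extend(travelers[:cutoff])

# Average wait time by traveler type, as recommended for public reporting.
by_type = defaultdict(list)
for traveler_type, wait in kept:
    by_type[traveler_type].append(wait)

for traveler_type, waits in sorted(by_type.items()):
    print(f"{traveler_type}: average wait {mean(waits):.0f} minutes")
```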
CBP is taking steps to overcome these challenges and determine how to implement these requirements by, among other things, collaborating with the DHS Science and Technology Directorate (S&T) to explore, test, and evaluate a mix of commercially available automated technologies for collecting wait times at various points in the inspection process. These technologies include Bluetooth and Wi-Fi technologies, people counter systems, and Radio Frequency Identification technology, among others. The DHS Apex Air Entry/Exit Re-Engineering program is a multi-year effort, in part, to improve the international arrivals process. Since 2015, S&T and CBP have been developing a Counting and Measuring project at S&T's Maryland Test Facility in Upper Marlboro, Maryland. The project is intended to evaluate the accuracy and efficiency of commercially available automated tools to monitor the number, flow, and location of travelers to determine the wait times and dwell times of travelers throughout FIS areas. Dwell time is a measure of the time a traveler spends at each stage of the process (e.g., the time the traveler spends in a line versus the time the traveler spends waiting for a bag at baggage claim). The project is also intended to provide accurate and real-time projected wait time information to travelers as they enter the FIS. S&T and CBP previously planned to operationally test the project at Washington Dulles International Airport (IAD) for 3 months starting in April 2017. However, CBP is in the process of designating a new test location. The operational test will go forward once CBP, S&T, and an airport agree on the new location. Given that, as of March 2017, CBP had not yet begun operational testing of the project, it is too early to tell the extent to which these efforts will help CBP to assess wait times and meet the new statutory airport wait time reporting requirements.

Because CBP has an important role in implementing the National Travel and Tourism Strategy to attract and welcome international visitors to the United States, its ability to provide useful wait time data that allows travelers to plan their flights to U.S. international airports is essential to enhancing their travel experience. Long wait times may result in travelers missing connecting flights or having negative experiences when traveling to U.S. international airports. CBP reports wait times on a public website to help travelers estimate possible wait times when planning their next flight, including scheduling a connecting flight. However, these data have limited usefulness to visitors because CBP reports wait times for all categories of travelers combined. Given the differences in wait times between, for example, U.S. citizens and visitors, reporting wait times for different categories of travelers could improve the usefulness of CBP's wait time data by providing travelers with more complete and accurate data on their wait times to help inform their flight plans, including scheduling connecting flights. It could also better position CBP to determine whether it is meeting its airport travel facilitation goals. 
To improve the usefulness of airport wait time data that CBP currently reports on its public website, we recommend that the Secretary of Homeland Security direct the Commissioner of U.S. Customs and Border Protection to report airport wait time data for different categories of travelers. We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix V, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendation regarding reporting wait time data for different categories of travelers and described the actions it plans to take in response. Specifically, DHS stated that CBP’s Office of Field Operations will enhance the Real Time Wait Time Reporting Tool to improve CBP’s ability to report timely and accurate wait time data in a usable format to include different passenger categories. If implemented effectively, these planned actions should address the intent of our recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this report were to examine (1) how U.S. Customs and Border Protection (CBP) and stakeholders implemented airport travel and tourism facilitation initiatives at U.S. international airports; (2) how CBP and stakeholders manage staff to facilitate the traveler entry process at U.S. international airports; and (3) the extent to which CBP has mechanisms to monitor and report wait times at U.S. international airports. To address the first objective, we identified CBP’s airport travel and tourism facilitation initiatives from the Model Ports program since 2007. These initiatives include Automated Passport Control (APC) kiosks, baggage first, diplomatic arrival processes and diplomatic processing lanes, electronic signage and multimedia, enhanced queueing, Express Connection, Global Entry kiosks, International to International baggage program, Mobile Passport Control (MPC), modified egress, One Stop, Professionalism Service Manager (PSM) program, Reimbursable Services Program, stakeholder meetings, and Variable Message Signage. We collected and analyzed information on the implementation of these initiatives at the 17 busiest U.S. international airports as of the end of fiscal year 2016, according to CBP. 
To examine how CBP and airport and airline stakeholders implemented these initiatives from 2007 through 2016, we reviewed CBP reports, including the Model Ports Program Report to Congress in 2010 and the Department of Commerce and the Department of Homeland Security’s (DHS) 2015 report to the President that defines a national goal to “provide a best-in-class arrivals experience.” We also reviewed CBP’s most recent version of its Airport Technical Design Standard; business requirements for APC kiosks; internal assessments and reports on initiatives such as Global Entry and MPC; and internal memorandums from the Model Ports program which directed officials at airports to test initiatives such as enhanced queueing and diplomatic processing lanes. To examine how CBP obtains feedback on the traveler experience, we reviewed CBP’s reports on its performance goals and measures, including its Traveler Satisfaction Survey Reports for the surveys it conducted in 2012, 2015, and 2016. We also reviewed CBP’s standard operating procedures for the Complaint/Compliment Management System and the directive that established policy and responsibilities of the PSM program. We interviewed CBP officials at headquarters, officials from eight travel and tourism industry associations selected based on the nature of the associations and suggestions by CBP and association officials, and the National Treasury Employees Union, the labor union representing CBP officers, to gain insights on initiatives. As shown in table 4, to obtain the perspectives of local CBP officials and stakeholders on the implementation of initiatives, we collected information and interviewed CBP officials and airport and airline representatives at 15 of the 17 airports and conducted site visits at 11 of these airports to observe airport operations. We obtained perspectives from airport authorities, airlines, terminal operators, and Office of Field Operations (OFO) officials at the 15 selected airports—including Port Directors or Acting Port Directors, Assistant Port Directors, and Professionalism Service Managers, among others—on how CBP and stakeholders have implemented initiatives to facilitate the international arrivals process to the United States and factors that affect the implementation of initiatives. At the 11 site visits, we observed OFO officers conducting inspections of international travelers and received demonstrations on how airports employ technology initiatives, such as APC and MPC, and viewed multimedia and signage, among other activities. We selected a non-probability sample based on traveler volume, traveler wait times, technology employment, and geographic diversity. We selected airports with the highest traveler volume, longest wait times, and most technology employment as well as the lowest traveler volume, shortest wait times, and least technology employment to provide a range of traveler experiences at the 17 busiest U.S. international airports. We considered traveler volume because, as we have previously reported, traveler volume is one of three key factors that affect traveler wait time. We considered wait time because it has a role in the experience of travelers arriving at U.S. international airports, according to CBP’s Traveler Satisfaction Surveys. We considered the extent to which airports have employed technology, including APC kiosks and MPC, because these initiatives can impact the wait times and experiences of travelers arriving at U.S. international airports. 
We considered geographic diversity to study a full spectrum of issues that impact airports, including security risk factors based on the origin of arriving flights, among others. The information we collected from these site visits cannot be generalized to all U.S. international airports. However, because we selected these airports based on a variety of factors, they provided us with a diversity of insights about the experience of international travelers arriving at the 17 busiest U.S. international airports. To address the second objective, we identified CBP's Workload Staffing Model (WSM) and various tools and strategies that CBP uses to manage its staff nationally and locally. To examine how CBP determines its staffing needs for officers at the ports of entry (POE) with the 17 busiest U.S. international airports, we reviewed CBP's WSM calculations; additional staffing calculations (add-ons) completed by officials at CBP headquarters to account for factors that the WSM cannot calculate, such as the forthcoming implementation of initiatives or new facilities; and the authorized staffing level for fiscal years 2014, 2015, and 2016. We assessed the reliability of these data by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of describing the staffing process. We reviewed CBP's internal statement of policy and intent for the use of the WSM; Resource Optimization Strategy and subsequent reports to Congress; and three reviews of the WSM, including a review by a government consulting firm in 2010, an internal review by a DHS Program Analysis and Evaluation team in 2012, and a DHS Office of Inspector General review in 2014. However, we did not conduct an evaluation of the WSM to determine its usefulness or accuracy as an officer staffing allocation tool during this review. To assess how CBP manages its available staff, we reviewed CBP's total overtime expenditures for the POEs with the 17 busiest U.S. international airports for fiscal years 2013, 2014, 2015, and 2016 and reviewed CBP's internal documentation. This included Time and Motion studies CBP conducted with a private contractor in 2014 and 2015 for each of the 17 busiest U.S. international airports, summer peak travel staffing plans for each of the 17 airports, local airport staffing rosters, Reimbursable Services Program fee agreements and weekly usage reports, and national and local collective bargaining agreements. We assessed the reliability of CBP data on funding spent on overtime and reimbursable service agreements, if applicable, at the 17 busiest U.S. international airports from fiscal years 2014 through 2016 by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We collected information and interviewed CBP officials and airport and airline representatives at the 15 selected airports and conducted site visits at 11 airports to gain a better understanding of the various factors that affect staffing and how CBP and stakeholders manage staff. 
We interviewed CBP officials and airport and airline representatives at these airports, as well as CBP officials at headquarters, officials from eight travel and tourism industry associations, and the labor union representing CBP officers, to gain insights on how CBP manages staffing nationally and locally at airports, as well as on staffing challenges. To address the third objective, we reviewed CBP's process for collecting, monitoring, and reporting airport wait time data. We collected and analyzed CBP airport wait time data for the 17 airports from May 2013 through August 2016. We assessed the reliability of these data by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We reviewed CBP internal documents, including Time and Motion studies CBP conducted with a private contractor in 2014 and 2015 for each of the 17 busiest U.S. international airports; standard daily, monthly, quarterly, and annual airport wait time and volume reports that officials at CBP headquarters use to monitor trends in airport wait times; after-action reports from Operational Review Teams, also referred to as "jump teams," sent to airports experiencing excessive wait times; and an internal memorandum on passenger wait time mitigation strategies for the summer travel season. We reviewed CBP's wait time calculation method, including its algorithm for automatically excluding refugees, asylum seekers, and three percent of travelers with the longest wait times from each flight, as well as CBP's manual process for excluding additional travelers from the wait time calculations that it reports on its public website. We reviewed legislation requiring CBP to publish its airport wait time information on its public website and CBP's annual reports to Congress on airport wait times in response to legislative requirements. We also collected information from and interviewed CBP officials and airport and airline representatives at the 15 selected airports and conducted site visits at 11 airports to gain a better understanding of the various factors that affect CBP airport wait times. We also interviewed DHS Science and Technology Directorate (S&T) officials, CBP officials at headquarters, officials from eight travel and tourism industry associations, and the labor union representing CBP officers to gain insights on wait time calculations and reporting. We compared this information against CBP performance goals and Standards for Internal Control in the Federal Government. We conducted this performance audit from February 2016 to March 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents images of select travel and tourism facilitation initiatives at the 17 busiest U.S. international airports. Figures 16 to 30 illustrate U.S. Customs and Border Protection (CBP) travel and tourism facilitation initiatives. 
Figures 31 to 34 illustrate stakeholder travel and tourism facilitation initiatives. Figures 35 to 40 illustrate examples of variation in CBP and stakeholder initiatives across airports or across terminals at an airport. Figures 41 to 48 illustrate additional examples of initiatives at airports. In the following tables, we identify the extent to which U.S. Customs and Border Protection (CBP) and stakeholders have implemented travel and tourism facilitation initiatives at the 17 busiest U.S. international airports, as of September 30, 2016. CBP's travel and tourism facilitation initiatives at U.S. international airports include Automated Passport Control (APC), baggage first, dedicated diplomatic processing lanes, electronic signage and multimedia, enhanced queueing, Express Connection, Global Entry, International to International baggage program, Mobile Passport Control (MPC), modified egress, One Stop, Professionalism Service Managers (PSM), Reimbursable Services Program fee agreements, stakeholder meetings, and Variable Message Signage. In addition, airports have implemented other travel and tourism facilitation initiatives, including color-coded queueing with retractable belts and signage, dedicated crew lanes, and electronic wait time monitoring systems in primary inspection or exit control areas. This appendix provides additional information on the average airport wait times for foreign visitors and U.S. citizens at the 17 busiest U.S. international airports from May 2013 through August 2016. In the following tables, we report U.S. Customs and Border Protection (CBP) airport wait time data for visitors and U.S. citizens at the 17 busiest U.S. international airports from May 2013 through August 2016. In addition to the contact named above, Kirk Kiester (Assistant Director), Luis E. Rodriguez (Analyst-in-Charge), Dominick Dale, Michele Fejfar, Timothy Guinane, Eric Hauswirth, Stephanie Heiken, Susan Hsu, James McCully, Sasan J. "Jon" Najmi, and Minette Richardson made significant contributions to this report.
Over 326,000 passengers and crew entered the United States through 241 international airports on an average day in fiscal year 2016, according to CBP. In 2007, CBP started its Model Ports program to improve the international arrivals process for travelers to the United States by implementing technology to facilitate entry and expanding public-private partnerships, among other things. GAO was asked to review this program and subsequent airport travel and tourism facilitation initiatives at the 17 busiest U.S. international airports associated with the President's National Travel and Tourism Strategy. This report examines (1) how CBP and stakeholders have implemented airport travel and tourism facilitation initiatives, (2) how CBP and stakeholders manage staff to facilitate the traveler entry process, and (3) the extent to which CBP has mechanisms to monitor and report wait times at U.S. international airports. GAO collected data on the implementation of travel and tourism facilitation initiatives and analyzed CBP officer staffing and wait time data at the 17 airports from fiscal years 2014 through 2016. GAO also visited a nongeneralizable sample of 11 airports, selected based on traveler volume and variety of implemented initiatives, among other factors, and interviewed CBP, airport, and airline officials at 15 of the 17 airports. U.S. Customs and Border Protection (CBP), within the Department of Homeland Security, and airport and airline stakeholders jointly implement travel and tourism initiatives at U.S. international airports to facilitate the arrival of travelers. These initiatives include Automated Passport Control self-service kiosks that allow eligible travelers to complete a portion of the CBP inspection process before seeing a CBP officer, and Mobile Passport Control that allows eligible travelers to submit their passport and other information to CBP via an application on a mobile device. Various airport-specific factors can affect whether and how CBP and stakeholders implement travel and tourism facilitation initiatives at each airport. These factors include the size and layout of the airport facility, the infrastructure needed to support initiatives, the willingness and ability of the airport stakeholders to pay for initiatives or infrastructure to support them, as applicable, and stakeholder discretion in how to implement initiatives. CBP has two airport travel facilitation goals, (1) improving customer service levels for international arrivals and (2) maintaining or reducing wait times, and has implemented mechanisms to assess and obtain feedback on the traveler experience. CBP allocates and manages staff using various tools, and stakeholders provide resources to help facilitate the traveler entry process. For example, CBP uses its Workload Staffing Model to determine the staffing requirements and help make allocation decisions for CBP officers at ports of entry, including airports. CBP also uses its Enterprise Management Information System to monitor and make immediate staffing changes to address traveler volume and wait time concerns at airports. Airport and airline stakeholders can also enter into agreements to pay for CBP officers to work overtime during peak travel hours or outside regular operational hours. CBP monitors airport wait times and reports data on its public website to help travelers plan flights, including scheduling connecting flights, but the reported data have limited usefulness to travelers. 
Currently, CBP does not report wait times by traveler type, such as U.S. citizen or foreign visitor. Rather, CBP reports average hourly wait times for all travelers on arriving international flights. By reporting wait times for all categories of travelers combined, CBP is reporting wait times that are lower than those generally experienced by visitors. According to GAO's analysis of CBP wait time data for the 17 busiest airports from May 2013 through August 2016, the average wait time was 13 minutes for U.S. citizens and 28 minutes for visitors, while the combined reported average wait time was 21 minutes. Reporting wait times by traveler type could improve the usefulness of CBP's wait time data to travelers by providing them with more complete and accurate data on their wait times. This could help inform their flight plans and could provide additional transparency to allow CBP to work with stakeholders to determine what changes, if any, are needed to improve the traveler experience and better manage wait times. This is a public version of a For Official Use Only—Law Enforcement Sensitive report that GAO issued in February 2017. Information DHS deemed For Official Use Only—Law Enforcement Sensitive has been redacted. GAO recommends that CBP report airport wait time data for different categories of travelers. CBP concurred with the recommendation and identified planned actions to address the recommendation.
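To make the reporting gap described above concrete, the short sketch below (in Python) contrasts a single combined average wait time with per-category averages, and loosely mirrors the idea, described in the methodology section, of excluding the small share of travelers with the longest waits on each flight before averaging. The traveler records, field names, and function names are hypothetical assumptions for illustration only; this is not CBP data, CBP's actual algorithm, or any system GAO reviewed.

```python
# Illustrative sketch only: hypothetical per-traveler records, not CBP data or code.
# It shows why a single combined average (as currently reported) can understate
# the waits that visitors experience relative to a per-category breakdown.

from statistics import mean

# Hypothetical wait times in minutes for one arriving flight.
records = [
    {"category": "US_CITIZEN", "wait_min": 8},
    {"category": "US_CITIZEN", "wait_min": 12},
    {"category": "US_CITIZEN", "wait_min": 15},
    {"category": "VISITOR", "wait_min": 22},
    {"category": "VISITOR", "wait_min": 30},
    {"category": "VISITOR", "wait_min": 41},
    {"category": "VISITOR", "wait_min": 95},  # unusually long wait
]

def trim_longest(waits, fraction=0.03):
    """Drop the given fraction of the longest waits (rounded down), loosely
    mirroring the idea of excluding the travelers with the longest waits on
    each flight before averaging. With this small sample, int(7 * 0.03) == 0,
    so nothing is dropped."""
    waits = sorted(waits)
    n_drop = int(len(waits) * fraction)
    return waits[:len(waits) - n_drop] if n_drop else waits

combined_avg = mean(trim_longest([r["wait_min"] for r in records]))

by_category = {}
for cat in ("US_CITIZEN", "VISITOR"):
    waits = trim_longest([r["wait_min"] for r in records if r["category"] == cat])
    by_category[cat] = mean(waits)

print(f"Combined average: {combined_avg:.1f} min")
for cat, avg in by_category.items():
    print(f"{cat}: {avg:.1f} min")
```

In this toy example the combined figure falls well below the visitor figure, which is the same pattern GAO's analysis found in CBP's data (21 minutes combined versus 28 minutes for visitors); reporting both figures is what the recommendation above is intended to achieve.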
USPS is required by law to provide prompt, reliable, and efficient services to patrons in all areas, a standard known as universal service. In meeting this standard, USPS is required to operate as a self-sufficient, independent establishment of the executive branch. USPS receives no annual appropriations for purposes other than revenue forgone on free and reduced rate mail. USPS generates revenue through the sale of postage and postal-related products and services. USPS has acknowledged that its operating costs must be cut, including by reducing the number of postal-operated retail facilities. Visits to post offices have decreased, with USPS reporting about 59 million fewer visits to post offices in 2010 than in 2009 and an overall decline in post office visits of about 21 percent over the last decade. However, it has been difficult for USPS to close post offices because of statutory restrictions on closing small post offices solely for operating at a deficit and resistance from employees, affected communities, and Members of Congress concerned about possible effects on service, employees, and communities. USPS officials have also said that the amount of time it takes to complete the statutory process for closing its facilities has hindered USPS from timely realignment of its retail network. Expanding retail alternatives is part of USPS's overall strategy to return to financial solvency while continuing to meet its universal service requirements. These alternatives have the potential to provide postal services at a lower cost to USPS than post offices, since USPS does not staff or maintain retail partners' facilities, and self-service options reduce the need for labor and facilities. In 2010, USPS reported that providing access through certain types of retail alternatives costs less in proportion to the revenue these alternatives generate, an estimate USPS officials referred to as cost per revenue dollar. The following retail alternatives are the focus of USPS's efforts to expand access. USPS Web site (usps.com). The Click-N-Ship section of USPS's Web site allows a customer to print domestic and international shipping labels using a computer, and the site's Postal Store offers stamps and collectable memorabilia (see fig. 1). Customers may also use the site to complete other tasks, such as informing USPS of a changed address or tracking their shipments. USPS self-service kiosks. USPS owns and operates about 2,500 kiosks, also known as automated postal centers, that allow customers to buy stamps and mail letters and packages in a self-service environment. Each kiosk consists of a touch-screen computer with a scale and is generally located in a post office lobby, with many allowing for 24-hour customer access (see fig. 2). Customers can make purchases at kiosks using debit or credit cards. Contract postal units (CPU). CPUs are the retail alternative most comparable to post offices. They generally provide a broad range of retail products and services to customers at USPS prices. Like post offices, CPUs do not offer competitors' shipping products and services. CPUs are operated and managed by independent retailers that USPS contracts with, providing them with signage and the rights to use the USPS logo. A CPU may be a stand-alone business or occupy space within a larger business, such as a counter within a store (e.g., pharmacy or grocery store) that also sells other products and services (see fig. 3). 
According to USPS, there were about 3,600 CPUs as of fiscal year 2010. Approved Shippers. Approved Shippers are retailers that may offer shipping services from a range of providers, including USPS. For example, Approved Shippers we met with offered shipping services from companies such as FedEx and UPS, as well as local delivery companies. USPS provides no compensation to the retailers, but provides its services at discounted commercial rates and puts no restrictions on additional fees that retailers can charge for USPS products and services. Vendors that participate in the Approved Shipper program are provided USPS branding rights and signage. According to USPS, as of fiscal year 2010, there were about 4,200 Approved Shippers. Stamp retailers. USPS's Stamps on Consignment program, managed by ABnote North America (ABnote), a company specializing in secure distribution and order fulfillment, makes stamps available at retailers such as grocery stores and pharmacies and at banks' automated teller machines. USPS generally provides no compensation for stamp retailers, and retailers cannot sell stamps above face value; however, banks that sell stamps through automated teller machines may charge customers a fee for this service, and other stores that are not Stamps on Consignment program participants may also resell stamps and charge additional fees. According to USPS, as of fiscal year 2010, there were more than 56,000 Stamps on Consignment locations selling stamps. The types of USPS products and services available at post offices compared with those available at retail alternatives are shown in figure 4. Retail alternatives are available in urban, suburban, and rural areas, supplementing USPS's traditional retail network of post offices, as illustrated in figure 5. USPS employees in headquarters and field offices have roles in implementing retail alternatives. Headquarters officials are responsible for designing and overseeing the retail alternatives programs, including setting goals, developing marketing campaigns, managing usps.com, and developing policies for local officials that oversee kiosks and retail partners. They also maintain databases on retail revenue and facilities. Officials in administrative field offices, particularly district offices, and post offices supervise postmasters and other managers who oversee and support local implementation of retail alternatives, in addition to their duties supporting the mail delivery network. These oversight duties include monitoring and servicing kiosks and training and monitoring retail partners. The Government Performance and Results Act of 1993 (GPRA) requires USPS to establish outcome-related performance goals for its major functions. GPRA also requires USPS, as it does other federal agencies, to develop performance indicators for measuring the relevant outcomes of each program activity to demonstrate how well it is achieving its goals. We have previously reported that performance data should be complete, accurate, valid, timely, and easily accessible to be useful. Furthermore, we have reported on the importance of reliable cost data, noting that it can help provide accurate comparisons of costs and benefits; inform budgets and proposals for reorganization; identify potential savings, efficiencies, and waste; benchmark programs and activities; and measure program and managers' performance. 
USPS’s efforts to expand access through retail alternatives are intended to support its strategic goals of improving service and financial performance. According to USPS’s 2010 Comprehensive Statement on Postal Operations, retail alternatives improve service by making postal products and services available at times and places consistent with customer preferences, and they improve financial performance by increasing revenue and providing services at a lower cost than traditional outlets. USPS intends for both types of retail alternatives—self-service and partnership programs—to make postal products and services more convenient by expanding the locations and times at which they are available. For instance, USPS has expanded the number of access points in the following ways: Providing customers the option to obtain postal services through its Web site, usps.com. According to USPS, use of the site has increased from about 312 million visits in 2006 to more than 413 million visits in 2010, showing that customers are increasingly accessing postal services online. Deploying 2,500 self-service kiosks in 2004 to selected post offices, to provide an alternative to post office windows that enables customers to conduct most types of common postal transactions. Partnering with retailers to sell stamps in over 56,000 locations, such as pharmacies or grocery stores. Expanding its Approved Shipper program in 2010 to about 4,200 participants, including by offering postal services in about 1,000 Office Depot stores. According to USPS officials, the number of stores increased to about 1,100 stores in 2011. Placing CPUs in rural areas that need service but would not generate enough revenue to justify the cost of operating post offices in those locations. According to USPS, retail alternatives improve service by providing access to customers not only in more places but also for longer hours. For example, usps.com is available 24 hours a day, and self-service kiosks are accessible when post office windows are closed. Additionally, USPS states that service at busy post offices can also be improved when kiosks in post office lobbies or CPUs in nearby areas are available to accommodate customers who could otherwise have long waits in post office lines. To further improve service through usps.com, USPS is currently redesigning its Web site with the intention of making it more useful for customers (see our assessment of USPS’s Web site redesign in app. II). Although retail alternatives expand service locations and hours, certain characteristics of these alternatives could be problematic for some postal customers. For example, USPS says that usps.com provides postal products and services “when and where” customers want them, but this option is only available to postal customers with access to the Internet. We identified the following characteristics that may affect customers’ ability to access postal service at retail alternatives: Cost. Postal services available through retail alternatives may be more costly for customers than at post offices because of added fees or limited options. For some customers, the convenience certain options offer may outweigh the added cost, but to more price-sensitive customers, higher costs could deter the use of some alternatives. 
Instances in which alternatives may be more costly include the following: USPS charges a $1.00 service fee for stamps purchased from usps.com or over the phone, and banks that sell stamps through automated teller machines are allowed under their service agreements to charge customers a service fee. The usps.com Click-N-Ship site offers Express Mail, which provides overnight or second-day delivery, and Priority Mail, which provides 1- to 3-day delivery, but not the less-expensive First-Class, Parcel Plus, and Media Mail shipping options. USPS puts no restrictions on added fees Approved Shippers may charge for the same services available at post offices. Furthermore, many retailers that resell USPS stamps, including some Approved Shippers, may charge customers prices other than face value for them. For example, during a site visit, we observed an Approved Shipper charging $11.00 for a book of 20 stamps valued at $8.80. Access requirements. Self-service options give customers the flexibility of accessing postal services directly. However, these options have additional requirements for a customer to be able to use them: Both usps.com and self-service kiosks require an electronic form of payment, such as a credit card. This creates a barrier for customers who do not have access to credit cards, which could disproportionately affect low-income customers. Internet access is required to use usps.com, putting customers who lack access to or do not use the Internet at a disadvantage, particularly those in remote areas. Closure and termination procedures. Retail partners, unlike postal-operated facilities, can close without public notice or an opportunity for public input, creating the potential for unanticipated gaps in service: CPU contracts are generally valid for an indefinite period, but either the CPU operator or USPS can terminate a contract at any time, with or without notice, depending on the circumstances. According to USPS, the number of CPUs has declined in recent years, from about 5,800 in 2006 to about 3,600 in 2010. USPS officials and representatives of a national association of CPUs said that the main reason for the decline in CPUs has been the recent downturn in the U.S. economy. Furthermore, USPS officials told us they have faced challenges with expanding the number of CPUs, including resistance from postal labor and higher costs than for implementing other types of retail alternatives. In order to minimize the potential for closures, USPS officials told us they assess potential CPU operators to determine whether their existing business is viable. USPS or Approved Shippers can terminate an Approved Shipper agreement with 10 days' notice to the other party. ABnote's agreement with stamp retailers states that either ABnote or the retailer may terminate the agreement with 30 days' written notice to the other party and that ABnote may terminate it immediately. USPS is required to provide universal service to customers throughout the United States, which includes providing access to its retail services. 
The Postal Reorganization Act of 1970 mandates that USPS provide "prompt, reliable, and efficient services to patrons in all areas and shall render postal service to all communities" as well as "…a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining." According to USPS, there are a number of dimensions to providing universal service, encompassing issues such as uniform prices and affordability, quality of service, access to services and facilities, and geographic scope—many of which are particularly applicable to providing retail services. However, USPS has not adopted a specific standard for universal service and has declined to create one. This makes it difficult to determine to what extent the differences in cost, access requirements, and closure procedures previously discussed could affect USPS's ability to meet its universal service mandate as it modernizes its retail network to rely more on retail alternatives and less on post offices. Such ambiguity will add to the challenge of defining an appropriate level of access under a modernized network, such as determining the optimal mix of retail alternatives and post offices USPS needs to effectively serve customers from varying socioeconomic and population demographics. Other countries that have modernized their postal retail networks to include more partner-owned and -operated facilities developed criteria for providing a minimum level of service to guide their restructuring efforts. Such standards can include different requirements for serving areas of higher and lower population density. For example, in our recent report on foreign posts' efforts to restructure their networks, we noted that Australia's universal service standards require that at least 90 percent of residences in metropolitan areas be located within 2.5 km (1.56 miles) of a postal retail outlet and, in nonmetropolitan areas, at least 85 percent of residences be located within 7.5 km (4.66 miles) of a retail outlet. Canada and Germany also set standards for determining appropriate geographic coverage of retail access points. Having criteria for assessing whether changes to its retail network conform with its requirements to provide universal service could help USPS determine the most cost-effective placement of retail access points—whether through post offices or alternatives. Such criteria could change over time to adapt to changing customer needs. They could also help USPS more clearly articulate how it intends to achieve its goals and better demonstrate its progress toward them. Without such measures, it is unclear how well USPS's efforts to expand access with retail alternatives are supporting its goal to improve service as intended. USPS has stated that retail alternatives support its goal to improve financial performance by generating revenue while offering products and services through outlets that are less costly than post offices. USPS officials told us that the increasing proportion of retail revenue from alternatives is a marker of improved financial performance, even though retail revenue from all sources—which constitutes about one-fourth of USPS's overall revenue—decreased from $19.2 billion in fiscal year 2006 to $17.5 billion in fiscal year 2010. 
USPS data show that the share of retail revenue from alternatives grew from about 22 percent in fiscal year 2006 to about 31 percent in fiscal year 2010, representing an increase from about $4.3 billion to $5.4 billion during this period, while at the same time, revenue from post office windows decreased from $14.9 billion to $12.1 billion (see fig. 6). According to USPS, revenue from retail alternatives in fiscal year 2011 represented 35 percent of overall retail revenue. USPS has projected that by 2020 alternatives to post offices will likely account for 60 percent of its retail revenue. Although overall postal revenue from alternative sources has grown in recent years, trends in revenue vary among the types of retail alternatives. From fiscal years 2006 through 2010, revenue from some types of retail alternatives has increased: According to USPS, revenue from usps.com has grown from about $370 million in fiscal year 2006 to about $640 million in fiscal year 2010. Revenue from self-service kiosks has also grown, increasing from about $410 million in fiscal year 2006 to about $580 million in fiscal year 2010. Revenue from Approved Shippers, a comparatively small program, grew from about $12 million in fiscal year 2006 to about $29 million in fiscal year 2010, according to USPS. However, revenue from other types of retail partners has decreased: Revenue from CPUs declined from about $730 million in fiscal year 2007 to about $625 million in fiscal year 2010. Revenue from stamp retailers declined in fiscal year 2010 to about $1.1 billion, after having grown from about $1.0 billion in fiscal year 2006 to about $1.2 billion in fiscal year 2009. According to USPS officials, retail alternatives also contribute to USPS's financial performance by providing access to its products and services at a lower cost than through post offices. In its March 2010 action plan, USPS presented estimates showing that, for each dollar in sales generated, costs were higher for post offices than for retail alternatives. Specifically, using fiscal year 2008 data, USPS contractors estimated that USPS incurred $0.23 to $0.39 in cost for each dollar in sales at post offices, while for retail alternatives, USPS estimated its costs per dollar of revenue ranged from $0.02 to $0.13. In assessing the success of retail alternatives in providing cost savings, USPS officials repeatedly pointed to the cost per revenue dollar estimates; however, these estimates do not represent actual cost savings, and we identified several limitations with these estimates as a measure of financial performance: The 2008 cost-per-revenue-dollar estimates are a snapshot of costs and not a model that projects future costs as inputs or external conditions change. For example, in estimating the costs of local oversight of retail partners, USPS assumed that the existing retail and delivery network would remain intact and did not account for potential closures or staffing changes in post offices. The estimates do not take into account how additional retail alternatives could increase or decrease the per-unit cost of retail alternatives. For example, as USPS adds more retail alternatives, such as CPUs or Approved Shipper locations, the cost of providing additional oversight may be comparatively less, since USPS has already invested resources necessary for training postmasters or managers to oversee the initial units. 
Moreover, the declining demand for postal products and services could also significantly change the cost per revenue dollar estimates. If revenue declines, as has been the case for some retail alternatives, absent reductions in costs, the cost per revenue dollar would increase. More recent expenses, such as for redesigning USPS's Web site or reviewing the performance of Approved Shippers, were not factored into the 2008 cost estimates and could conceivably change the estimates. Consequently, the 2008 cost-per-revenue-dollar estimates do not provide a complete picture of cost savings realized or expected from implementing retail alternatives. According to USPS, the estimates are being updated with fiscal year 2009 and 2010 data using a similar methodology and include changes intended to improve their accuracy. As of October 2011, USPS had not completed this update. USPS officials said they were unable to provide any details about actual cost savings resulting from their efforts to expand retail alternatives because wider adoption of retail alternatives is needed before USPS can realize cost savings by reducing staff at specific post offices. Officials responsible for oversight of retail partnerships told us that although they track the cost of their programs, they have not determined metrics for identifying cost savings, and need better cost data analysis to make effective program decisions. Furthermore, USPS officials told us their cost data systems were designed more for providing information in determining pricing of postal products than for analyzing costs of specific program areas. USPS did, however, estimate in 2003 that implementing self-service kiosks would save an average of $110 million annually in labor costs from fiscal year 2005 through fiscal year 2009, but USPS has not assessed whether such cost savings were achieved. USPS officials in headquarters and field offices told us that they can track changes in staffing at post offices that can result in cost savings, but cannot determine whether such changes are the result of customer shifts to retail alternatives or declines in demand for other reasons. USPS has stated that it will realize cost savings as it closes redundant and underutilized post offices in response to decreased demand and customers shifting to retail alternatives. USPS announced in September 2011 that it will review as many as 15,000 post offices for possible closure, which it stated could produce annual savings of $1.5 billion as part of an effort to eliminate $20 billion in annual costs by 2015. As part of the review it began in July 2011, USPS said that post offices that could potentially close are those with insufficient demand and available alternative access. Also, in July 2011, USPS launched a new retail partnership initiative, the Village Post Office, which directly ties retail alternatives to USPS's ability to cut costs. USPS described the Village Post Office as a replacement option for some communities where underutilized yet costly post offices may close. With the Village Post Office, USPS intends to partner with existing small businesses to provide a limited array of postal products and services to the local community, including mail collection boxes, post office boxes, stamps, and flat-rate shipping and mailing products. USPS launched its first Village Post Office in the town of Malone, Washington, in the summer of 2011, and USPS officials said they expect to open several thousand similar outlets by the summer of 2012. 
Our past work has shown that replacing postal-owned and -operated facilities with privately owned and operated facilities is a strategy some foreign posts, such as those in Australia, Germany, Finland, and Sweden, have used to restructure their retail networks in order to contain facility and labor costs. USPS officials we spoke with recognized, however, that until post office closures actually occur, efforts to expand retail alternatives will yield no cost savings and in fact could increase costs, since, although the alternatives are generally less costly, USPS still incurs start-up, administration, and oversight costs. Given USPS’s financial challenges, a clear plan guiding investments in its retail network is essential, including how it intends to increase access through retail alternatives while considering cuts to its network of post offices. USPS has not yet produced a plan outlining how retail alternatives, as part of USPS’s overall retail network, will improve service and financial performance. USPS released a plan to Congress in June 2008 outlining changes to its processing, transportation, and retail networks that included descriptions of alternatives it was pursuing, but did not include specific goals for expanding access through alternatives or specific related actions it would take to achieve cost savings. We reported in February 2011 that USPS officials said they were developing a retail strategy that would be made public in early 2011. However, as of October 2011, USPS officials told us they had not prepared a documented strategy for retail. According to an official responsible for USPS retail programs, such a plan has not been completed because the needs of postal customers continue to evolve. Members of Congress have introduced legislation calling for USPS to develop a plan that addresses customer access, the closure and consolidation of post offices, and estimated cost savings attributable to such closures and consolidations. Furthermore, GPRA identifies how such a plan should look, what it should include, and how it would help USPS measure progress toward its goals. As USPS continues to make changes to its retail network, an ongoing focus on public communication will be important to foster customers’ acceptance of retail alternatives. USPS’s Corporate Communications officials told us that building awareness of retail alternatives among the public has been a particular challenge. We have previously reported that agencies need to ensure they adequately communicate with external stakeholders, such as consumers, whose actions have a significant impact on an agency achieving its goals. To accomplish this, agencies develop communication strategies, which can include actions for building awareness and support for a program. Determining whether such communication is adequate can include assessing whether the agency’s message is reaching the intended audience, which is particularly important when the agency is trying to reach specific populations, such as those in rural areas or with low incomes. Without feedback from such groups, the agency cannot know whether its communication strategies are building the awareness and support needed to achieve its goals. USPS has included actions to inform the public about retail alternatives in its retail communications strategies and has recently launched communications efforts aimed specifically at increasing awareness of retail alternatives. 
In developing its public communications, USPS conducted focus groups to identify messages that would resonate with customers. USPS's actions under these strategies include the following: As previously discussed, in July 2011, USPS launched the Village Post Office initiative to partner with small businesses to offer a limited array of postal products and services in areas where USPS may close underutilized post offices. In May 2011, USPS launched a communications campaign to promote the availability of its services at post offices and retail alternatives. This campaign uses the slogan "we're everywhere so you can be anywhere" to communicate the availability of USPS products and services at locations other than post offices. Also in May 2011, USPS released an initial version of a new online tool customers can use to find post offices and retail partners, including Approved Shippers, which have generally received less promotion from USPS than other alternatives. USPS planned to release additional improvements to its online locator as part of the usps.com redesign. In 2010, to improve the public's awareness of USPS services at multiple retail points, USPS created a set of icons that post offices and retail partners can display to show specific products and services available at a particular location (see fig. 7). A full-service post office would have most or all of these products and services, while retail partners like CPUs and Approved Shippers would have fewer, such as only stamps, mailing, and shipping options. In May 2011, USPS developed its plan for communicating the redesign of usps.com. This plan focuses on the benefits customers may expect from the new features that will be deployed with each of the project's phases. The plan called for USPS to use multiple methods to communicate with external stakeholders, including the public, such as press releases, e-mail to its business partners, and external publications. Key goals of the plan were to ensure that the Web site's current customers were aware of the launch of the new site, to help them understand the benefits of the transition, and to build interest among other customers in the new site. As previously discussed, some customers may be more likely to obtain postal services at post offices than through retail alternatives, because of certain characteristics that may make alternatives problematic for them. We have previously reported that USPS needs to clearly communicate to the public how its plans to optimize its retail network will affect customers, particularly those in rural areas. We continue to believe this is important, especially in light of the changes that USPS is undertaking to close post offices and replace some of them with retail partners, a process that includes consideration of whether sufficient retail alternatives are available. According to a senior USPS official responsible for advertising, USPS used to measure changes in public awareness by conducting surveys before and after major advertising campaigns, but is no longer doing so. Although financial constraints may preclude USPS from introducing new customer feedback tools, it could potentially use existing tools to gauge public awareness of retail alternatives. USPS currently surveys residential and business customers about their experiences with USPS products and services, including their views on the convenience and customer service of post offices. 
The household diary study USPS conducts annually to assess customers' use of the mail is another such tool. Either of these options could serve as a platform for obtaining feedback on how aware customers are of retail alternatives and whether they are meeting customers' needs. Feedback about customers' awareness and use of retail alternatives can help USPS be sure its message is reaching its intended audience. For USPS, as for the foreign posts we reviewed recently, having an effective communications strategy is an important way to mitigate resistance to modernizing a postal retail network. For example, Sweden's postal service, Posten AB, developed a comprehensive public communications campaign to inform its stakeholders of how it was transforming its retail network, an effort begun in 2001. This campaign was intended to help change the perception of "the post as a place" to "the post as a service." Posten AB officials told us they made efforts to show customers and other stakeholders that although retail facilities owned and operated by the post were closing, the new retail partnerships offered more access points and made postal products and services more convenient to obtain. Swedish postal officials told us that the public was initially resistant to the sweeping changes to the nation's postal infrastructure, but ultimately accepted them. As of 2009, Sweden had transformed its retail network to be 88 percent owned and operated by retail partners. According to the officials, their public communications campaign was central to the post's successful transformation of its retail network. Similarly, three of the five other foreign posts we examined maintained a retail network with a majority of partner-owned and -operated facilities rather than their own traditional post offices. That about a third of USPS's retail revenue now comes from alternatives to post offices shows the public has started to accept and use retail alternatives. As previously discussed, USPS announced this year it is studying several thousand post offices for potential closure and intends to close up to 15,000 post offices by 2015. If USPS closes these post offices as planned, it will be increasingly important for it to effectively encourage widespread adoption of retail alternatives. As USPS has expanded its use of retail partners, it has taken steps to help ensure that such third-party retailers offer products and services in accordance with its procedures. Specifically, USPS has established procedures for entering into written agreements with retail partners, training them, and monitoring their performance. USPS contracts and agreements with retail partners establish terms of service and requirements for providing service: The standard CPU contract we examined specifies that a CPU must offer stamps, domestic and international shipping services, and other special services, such as insurance and confirmation of an item's delivery, and that the CPU may not offer competitors' shipping products and services. Approved Shippers sign licensing agreements stating they will comply with USPS requirements for offering postal products and services and follow guidelines on displaying USPS-branded signage and promotional materials. Stamp retailers sign agreements with USPS's contractor, ABnote, stating they agree to offer stamps at a price no higher than face value and advertise the availability of stamps. USPS has procedures for providing initial and ongoing training and guidance to its retail partners. 
According to USPS guidance and officials, USPS provides new CPU operators with initial classroom and on-the-job training, which covers topics such as customer service, product knowledge, and equipment use. USPS may provide additional training to CPUs to cover changes in its products and services. Approved Shippers do not generally receive in-person training from USPS, but do receive training materials such as a guide to postal products, and Office Depot employees receive employer-provided training on USPS services. USPS provides additional updates to retail partners through its field offices, and retail partners can call local USPS offices if they have questions. Most CPU operators we met with during our site visits raised no concerns about their training; however, operators at two CPUs told us they would like refresher training. Additionally, one CPU operator in Northern Virginia told us it would be useful to have more communication from USPS about the CPU's performance. Furthermore, according to a national association of CPUs, some CPUs have had trouble identifying an appropriate point of contact at USPS because of staffing changes. USPS's guidance for overseeing CPUs states that local USPS officials are required to monitor the performance of CPUs on a quarterly basis through on-site reviews of their operations and compliance with policies. According to USPS officials, similar reviews have also been required of Approved Shippers since fiscal year 2009; however, we do not know what these reviews cover because USPS did not provide the guidance for these reviews that we requested. As we observed during our site visits, reviews of retail partner sites do not always occur quarterly as USPS intends. CPU operators in two of the four districts we visited told us that such reviews were happening either less frequently than required or not at all. Furthermore, none of the Approved Shippers we visited told us they had been visited by USPS staff conducting a quarterly review. Cuts in USPS's management-level staffing may contribute to lapses in oversight of retail partners. According to USPS headquarters officials, no staff in field offices or post offices are solely responsible for oversight of retail alternatives and personnel with such responsibility must balance their oversight duties with other responsibilities. Reductions in USPS staffing have led to the elimination or consolidation of management roles that provide oversight of retail partners, such as field managers who oversee USPS retail operations in several districts. USPS officials in one district told us that, although they were required to conduct such visits with Approved Shippers, they were not doing so. According to these officials, the quality of monitoring of retail partners had suffered because recent USPS staffing cuts had consolidated administrative duties for responsible post office employees. We have previously reported that a risk-based approach to oversight can help agencies effectively target constrained resources to better address potential problem areas. Such an approach could help USPS identify which retail partners should be monitored more closely and would give managers flexibility to conduct reviews more or less frequently as warranted by available resources and assessments of risks. 
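One way to picture such a risk-based approach is a simple scoring scheme that turns the kinds of information discussed in this section (partner revenue, customer complaints, and prior review results) into a review schedule. The sketch below is a hypothetical illustration only; the field names, weights, and cut points are assumptions made for the example and do not reflect USPS data, USPS policy, or any system we reviewed.

```python
# Minimal sketch of a risk-based review schedule, assuming hypothetical
# partner records and arbitrary weights/thresholds; not USPS policy or code.

from dataclasses import dataclass

@dataclass
class RetailPartner:
    name: str
    annual_revenue: float        # dollars
    complaints_per_1k_tx: float  # customer complaints per 1,000 transactions
    failed_last_review: bool     # result of most recent on-site review

def risk_score(p: RetailPartner) -> float:
    """Combine the kinds of data the report notes USPS already collects
    into a single score; higher means more oversight attention."""
    score = 0.0
    score += min(p.annual_revenue / 100_000, 3.0)   # larger operations carry more exposure
    score += p.complaints_per_1k_tx * 2.0           # complaints weigh heavily
    score += 4.0 if p.failed_last_review else 0.0   # prior problems dominate
    return score

def review_frequency(score: float) -> str:
    # Arbitrary illustrative cut points.
    if score >= 6.0:
        return "monthly on-site review"
    if score >= 3.0:
        return "quarterly on-site review"
    return "semiannual desk review"

partners = [
    RetailPartner("CPU A", 250_000, 0.2, False),
    RetailPartner("Shipper B", 40_000, 1.5, True),
    RetailPartner("CPU C", 60_000, 0.1, False),
]

for p in sorted(partners, key=risk_score, reverse=True):
    print(f"{p.name}: score={risk_score(p):.1f} -> {review_frequency(risk_score(p))}")
```

Under a scheme like this, a partner with a failed prior review and a high complaint rate would be reviewed most often, while low-risk partners could be reviewed less frequently, which is how a risk-based approach can stretch constrained oversight resources.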
USPS already collects data on retailers’ revenue, complaints from customers, and results of monitoring reviews, which could be suitable for determining a retailer’s relative level of risk and an appropriate method and frequency of monitoring. USPS could then make better use of its management resources, particularly during a time when such resources are being cut back. Effective monitoring of retail partners is important because it can identify bad operators whose actions could potentially undermine USPS’s efforts to encourage the adoption of retail alternatives. Stakeholder organizations representing USPS postmasters, a major postal union, and consumers expressed concerns about the potential for retail partners to be inadequately trained in how they provide postal services, which could harm USPS’s image, increase the potential for fraud, and frustrate customers. If retailers offering USPS products and services provide inadequate service, customers may be unwilling to adopt retail alternatives for their postal needs, hampering USPS’s efforts to increase their use. For example, during our site visit to USPS’s Houston district, officials told us about a CPU that was poorly managed, leading to customer complaints and decreased revenue. Eventually the CPU improved after its management was replaced. USPS has acknowledged that to improve its financial condition, it needs to make changes to its operations, including modernizing its retail network to more cost-effectively serve customers through the use of retail alternatives. USPS now offers mailing and shipping products through thousands of retailers and stamps through tens of thousands of locations. Further network restructuring must occur quickly if USPS is to address its financial difficulties, and USPS has announced plans to study 15,000 post offices for potential closure by 2015. Expanding access through retail alternatives needs to be an integral part of this effort, but USPS has still not developed a plan to clearly outline its vision for what a modern retail network will look like, including how retail alternatives will help maximize cost savings and preserve customer access to a degree sufficient for meeting USPS’s universal service mandate. A plan could facilitate realizing the cost savings USPS expects to achieve by expanding retail alternatives, but its costs could increase if it expands access without concurrently making cost-saving cuts in its expensive network of 32,000 post offices. Furthermore, effective progress measures and the data to support such measures would help USPS and key stakeholders determine whether efforts to expand access are improving its service and financial performance as intended. No amount of planning or oversight will ensure the success of providing postal services through retail alternatives if the public does not use them. Clear communication about USPS’s plans for providing access to postal services, even if it creates short-term resistance, will more likely create long-term acceptance if the public knows why, where, and how it may access postal services through alternatives, particularly as post offices close. Although USPS has strategies for communicating with the public about retail alternatives, it lacks methods of assessing whether its message is both reaching its intended audience and having the intended effect. 
USPS expects that, by 2020, retail alternatives will replace post offices as the principal means of access for its retail products and services—an outcome dependent on the public’s growing awareness, acceptance, and use of the alternatives. Since USPS expects to continue expanding its use of third-party retailers to provide postal services, it will need to have the resources necessary for effective monitoring of those third parties to help ensure they follow USPS procedures and provide quality service. On the basis of our site visits, we found that quarterly reviews of such retail partners are not occurring as required. And because the risk associated with any particular retailer may vary, a risk-based oversight approach could help USPS direct its resources toward those retailers that require more oversight. Factors that inform risk could include volume of sales, incidence of customer complaints, or prior performance history. As USPS continues to cut its administrative staffing to address funding shortfalls, efficient use of its streamlined management resources will be increasingly important. We are making the following two recommendations to the Postmaster General: To better ensure that USPS’s efforts to expand access through retail alternatives support its strategic goals to improve its service and financial performance, the Postmaster General should develop and implement a plan with a timeline to guide efforts to modernize USPS’s retail network that addresses both traditional post offices and retail alternatives. This plan should also include: criteria for ensuring the retail network continues to provide adequate access for customers as it is restructured; procedures for obtaining reliable retail revenue and cost data to measure progress and inform future decision making; and a method to assess whether USPS’s communications strategy is effectively reaching customers, particularly those customers in areas where post offices may close. To help ensure CPUs and Approved Shippers provide postal products and services in accordance with USPS policies, while making efficient use of its constrained resources, the Postmaster General should establish procedures to focus monitoring of retail partners on those determined to be at a greater risk of not complying with its requirements and procedures. We provided a draft of this report to USPS for review and comment. In response to our recommendation to develop a plan to guide its retail network modernization, USPS stated that it is developing a comprehensive strategic plan to identify efforts and activities across the organization that align with optimizing the retail network. In response to our recommendation to establish risk-based procedures for monitoring retail partners, USPS agreed to review its current monitoring policies and stated that its review will be incorporated within its strategic planning efforts. USPS’s full comments are reprinted in appendix III. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix IV. This report discusses (1) how the U.S. 
Postal Service’s (USPS) efforts to expand access through retail alternatives support its strategic goals of improving service and financial performance, (2) how USPS communicates with customers about the availability of its products and services at retail alternatives, and (3) what actions USPS has taken to oversee third parties that provide postal products and services. To obtain information for all of our objectives, we reviewed USPS documents and interviewed USPS officials responsible for implementing efforts to expand retail alternatives. We reviewed internal control standards and our prior reports to identify appropriate criteria for assessing aspects of how USPS manages its efforts to provide access through retail alternatives. We also visited four USPS districts to see how retail alternatives are being implemented at the local level. Specifically, we visited USPS district offices, post offices, contract postal units (CPUs), and Approved Shippers in Detroit Lakes and Richwood, Minnesota, and Fargo, North Dakota (Dakotas District); Conroe and Katy, Texas (Houston District); Arlington and Winchester, Virginia (Northern Virginia District); and Miami and Stuart, Florida (South Florida District). We selected the Dakotas, Houston, and South Florida districts because, for fiscal years 2009 and 2010, their revenue from retail alternatives, growth in alternative revenue, and percentage of retail revenue from alternatives were higher than average. We chose the specific locations we visited to include a more urban and a more rural location in each district based on 2000 census data; locations where there were generally CPUs, Approved Shippers, and self-service kiosks present; and locations with higher than average revenue. We also interviewed USPS partners, contractors, and representatives of key groups affected by USPS retail efforts to obtain their views on USPS’s efforts to provide access through retail alternatives (see table 1). We identified these stakeholders through reviews of USPS regulatory proceedings and prior GAO and USPS Inspector General reports, and recommendations from other stakeholders and experts. To determine how USPS’s efforts to expand retail access support its strategic goals of improving service and financial performance, we reviewed USPS’s strategic planning documents to identify how efforts to expand retail alternatives are linked to its strategic goals and what the related performance measures are. We also reviewed USPS’s requirements under the Government Performance and Results Act of 1993 and prior GAO reports on strategic planning to achieve agency goals. We identified differences in services provided through retail alternatives and post offices using information obtained from USPS officials, site visits, and our examination of usps.com. To assess USPS’s plans to redesign its retail Web site, we reviewed literature on best practices for retail Web site design. We analyzed available USPS data on revenue from retail alternatives for fiscal years 2006 through 2010 to identify trends and permit comparisons with revenue from post offices. USPS data on revenue from retail alternatives comes from various USPS sources that maintain the data in different ways and are therefore not comparable.
Specifically, the data we provide about overall retail revenue, as well as revenue from post offices and retail alternatives in general, come from USPS’s audited accounting database, which contains revenue data that have been adjusted to account for factors such as customer returns, lost inventory, and how USPS counts revenue for stamps sold but not yet used. Because of these adjustments, USPS officials said this data source is more accurate for reporting overall retail revenue. In contrast, data on revenue from specific alternatives comes from a variety of USPS data sources, including databases of gross sales data that have not been reconciled in the same manner as the accounting database. Neither source could provide revenue data both overall and for specific retail alternatives, since the accounting database does not break out revenue for all of the types of alternatives we examined, and the sales database does not include revenue from post offices and CPUs that do not report revenue through an electronic point-of-sale transaction system. Furthermore, there is some overlap in the revenue data from specific retail alternatives. This occurs because some retail partners obtain the stamps they sell from USPS’s Stamps on Consignment program, which is counted as revenue under that program, and then the partners report all postal sales, including stamp sales, to USPS, thus creating the potential to double-count some revenue when using the gross sales data. Further affecting our ability to complete planned analyses were substantial delays in receiving responses to our requests for data from USPS. For example, delays of several months precluded planned analyses of trends in the number of specific types of alternative retail outlets in different geographic areas, differences in the types of products and services sold through different retail outlets, and trends in revenue for the specific locations and districts we visited. According to USPS officials, major staff restructuring that occurred while we were conducting our audit made it difficult for USPS to respond in a timely manner. Consequently, we scaled back our data analysis to focus on trends in revenue and the number of locations for retail outlets overall and for the specific types of retail alternatives that were the focus of our review. We assessed the reliability of each of the data sources we used by interviewing USPS officials responsible for them or sending USPS questionnaires to obtain written answers about its procedures for maintaining the data and verifying their accuracy. After reviewing this information, we determined that the revenue data for post offices, usps.com, self-service kiosks, CPUs, Approved Shippers, stamp retailers, retail alternatives in general, and retail services overall were sufficiently reliable for presenting rounded figures of USPS revenue. Additionally, we obtained data from USPS officials about the number of outlets for each type of retail alternative. Although we did not verify the accuracy of these data, we believe they are sufficiently reliable to provide context for the relative number of alternative outlets offering access to USPS products and services. We reviewed USPS’s estimates of the cost per revenue dollar earned from retail alternatives and discussed the methodology used for the estimates with USPS officials. 
We compared those estimates to estimates of post office costs prepared by USPS contractors, although we did not review the post office estimates in depth because it was beyond the scope of this review. To determine how USPS communicates with customers about the availability of its products and services at retail alternatives, we reviewed USPS documents such as communications strategies and presentations and interviewed USPS officials responsible for developing and implementing public communications and advertising strategies for retail alternatives. We also reviewed USPS public communications, such as press releases, reports, and other information on the USPS Web site. We reviewed our prior reports on effective public communication by government agencies. To determine what actions USPS has taken to oversee third parties that provide access to postal products and services, in addition to the site visits previously discussed, we reviewed USPS documents and interviewed USPS officials in headquarters and at the local level to determine how USPS recruits, contracts with, trains, and monitors retail partners. In examining USPS’s efforts to expand access through retail alternatives, we focused on specific aspects of management related to program goals and measures, use of data for decision making, guidance and training, risk assessment and monitoring, and public communication. We did not assess other aspects of management, such as project planning or management of financial systems. We conducted this performance audit from December 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Created in 1994 as an information site, usps.com was later expanded to include retail components that today offer access to online shipping services and postal products, including stamps. Beginning in 2009, USPS began an overhaul of its Web site to improve its infrastructure and customer interface with an overall goal of improving customers’ experience with the site. The first phase of the redesigned site was released in July 2011, and according to USPS planning documents, other new functions are expected to be released through early 2012. Because the redesign was still ongoing during our audit work, we were unable to evaluate the new Web site, but examined whether the intended functionality of the site is consistent with industry best practices for retail Web sites. Planned improvements to usps.com generally align with industry best practices for better serving customers, as shown in literature on retail Web site design. Table 2 outlines how USPS’s plans for the Web site redesign correspond to industry best practices. In addition to the individual named above, Heather Halliwell, Assistant Director; Jameal Addison; Leia Dickerson; Patrick Dudley; Bess Eisenstadt; Andrew Huddleston; Sara Ann Moessbauer; Josh Ormond; Friendly Vang-Johnson; and Crystal Wesco made key contributions to this report. U.S. Postal Service: Mail Trends Highlight Need to Fundamentally Change Business Model. GAO-12-159SP. Washington, D.C.: October 14, 2011. U.S. Postal Service: Actions Needed to Stave off Financial Insolvency. GAO-11-926T. Washington, D.C.: September 6, 2011. U.S. 
Postal Service: Dire Financial Outlook and Changing Mail Use Require Network Restructuring. GAO-11-759T. Washington, D.C.: June 15, 2011. U.S. Postal Service: Foreign Posts’ Strategies Could Inform U.S. Postal Service’s Efforts to Modernize. GAO-11-282. Washington, D.C.: February 16, 2011. U.S. Postal Service: Strategies and Options to Facilitate Progress toward Financial Viability. GAO-10-455. Washington, D.C.: April 12, 2010. U.S. Postal Service: Financial Crisis Demands Aggressive Action. GAO-10-538T. Washington, D.C.: March 18, 2010. U.S. Postal Service: Financial Challenges Continue, with Relatively Limited Results from Recent Revenue-Generation Efforts. GAO-10-191T. Washington, D.C.: November 5, 2009. U.S. Postal Service: Restructuring Urgently Needed to Achieve Financial Viability. GAO-09-958T. Washington, D.C.: August 6, 2009. U.S. Postal Service: Broad Restructuring Needed to Address Deteriorating Finances. GAO-09-790T. Washington, D.C.: July 30, 2009. High-Risk Series: Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability. GAO-09-937SP. Washington, D.C.: July 28, 2009. U.S. Postal Service: Network Rightsizing Needed to Help Keep USPS Financially Viable. GAO-09-674T. Washington, D.C.: May 20, 2009. U.S. Postal Service: Escalating Financial Problems Require Major Cost Reductions to Limit Losses. GAO-09-475T. Washington, D.C.: March 25, 2009. U.S. Postal Service: Deteriorating Postal Finances Require Aggressive Actions to Reduce Costs. GAO-09-332T. Washington, D.C.: January 28, 2009. U.S. Postal Service: USPS Has Taken Steps to Strengthen Network Realignment Planning and Accountability and Improve Communication. GAO-08-1022T. Washington, D.C.: July 24, 2008. U.S. Postal Service: USPS Needs to Clearly Communicate How Postal Services May Be Affected by Its Retail Optimization Plans. GAO-04-803. Washington, D.C.: July 13, 2004.
Declines in mail volume have brought the U.S. Postal Service (USPS) to the brink of financial insolvency. Action to ensure its financial viability is urgently needed. Visits to post offices have also declined, and in an effort to cut costs, USPS is considering closing nearly half of its 32,000 post offices by 2015. In their place, alternatives to post offices, such as the Internet, self-service kiosks, and partnerships with retailers, are increasingly important for providing access to postal services. Retail alternatives also hold potential to help improve financial performance by providing services at a lower cost than post offices. As requested, this report discusses how (1) USPS's efforts to expand access through retail alternatives support its service and financial performance goals, (2) USPS communicates with the public about retail alternatives, and (3) USPS oversees its retail partners. To conduct this work, GAO analyzed USPS documents and data and interviewed USPS officials and stakeholders. GAO also interviewed operators of postal retail partnerships. USPS has expanded access to its services through alternatives to post offices in support of its goals to improve service and financial performance. Retail alternatives offer service in more locations and for longer hours, enhancing convenience for many customers, but certain characteristics of these alternatives could be problematic for others. For example, services obtained from some alternatives cost more because of additional fees, which could deter use by price-sensitive customers. Furthermore, although about $5 billion of its $18 billion in fiscal year 2010 retail revenue came from alternatives, USPS officials said it is too early to realize related cost savings. USPS also lacks the performance measures and data needed to show how alternatives have affected its financial performance. A data-driven plan to guide its retail network restructuring could provide a clear path for achieving goals. Without such a plan, USPS may miss opportunities to achieve cost savings and identify which alternatives hold the most promise. USPS has sought to raise customers' awareness by developing media campaigns, enhancing its online tools for locating postal access points, and creating standard symbols for post offices and retail alternatives to show which products and services they offer. However, USPS has not assessed whether its message is reaching its customers, such as by using one of its existing customer surveys, and therefore does not know to what extent customers are aware of and willing to use its various retail alternatives. Although the public increasingly uses postal retail alternatives, more widespread adoption will be needed if USPS is to close thousands of post offices as planned in the next few years. USPS has projected that by 2020 alternatives to post offices will account for 60 percent of its retail revenue. USPS's oversight of its retail partners, which includes entering into written agreements with them and providing training and guidance, could be improved if USPS modified its approach to monitoring compliance with its procedures. Local USPS officials are supposed to conduct quarterly reviews of retail partners to make sure they are following mailing procedures, but according to retail partners and USPS officials in field and local offices, these reviews do not always occur as often as intended because of resource constraints. 
A risk-based monitoring approach would allow USPS to target its limited oversight resources to areas of concern and thus address issues that could otherwise discourage customers from adopting retail alternatives, such as inadequate service. USPS should develop a plan to guide its retail network restructuring that is supported by relevant performance measures and data and includes a method to assess the effectiveness of its public communication strategy. USPS should also implement a risk-based approach to monitoring retail partners. USPS reviewed a draft of this report and stated it is developing a plan to guide its retail network restructuring and agreed to review how it monitors retail partners.
DOT submits biennial reports, called Conditions and Performance Reports, to the Congress, detailing the state of the nation’s highways, bridges, and other surface transportation systems along with investment requirements for these systems. In developing its portion of the report, FHWA bases its estimates of investment requirements for most highways on the Highway Economic Requirements System (HERS) computer model. Before using the HERS model, FHWA used an engineering model that compared highway conditions with engineering standards, identified deficiencies, and calculated investment needs by totaling the costs of fixing all the deficiencies. In contrast, the HERS model compares the relative costs and benefits associated with potential highway improvements, such as widening or resurfacing, to identify those that are economically justified. The HERS model begins by assessing the current condition of the highway sections in its database. It then projects the future condition and performance of the highway sections on the basis of expected changes in factors such as traffic, pavement condition, and average vehicle speed. (See fig. 1.) The model identifies deficient highway sections, ranks improvements by economic merit (benefits exceeding costs), and then selects improvements. Benefits considered include reductions in factors like travel time, vehicle operating costs, accidents, and vehicle emissions over the lifetime of the improvement, while costs considered include the capital expenditures required to construct the improvement. The total cost of constructing selected improvements represents the future investment requirement for highways included in the HERS model. FHWA can calculate these costs on the basis of several different scenarios. For example, under the “economic efficiency” scenario, the model selects and implements all the improvements for which benefits exceed costs. Under the “maintain current (pavement) conditions” scenario, the HERS model selects and implements the least costly mix of improvements that would maintain average pavement conditions. Under a third scenario, designed to address road congestion, the HERS model selects the least costly improvements that would maintain current travel times. To run the HERS model, FHWA uses highway condition and performance data that each state collects and annually updates on a sample of highway sections representing different highway classes. The highway sections range in length from 1 block to 10 miles. States are to report detailed highway data for sampled highway sections. The data include information on highway capacity, traffic volume, pavement roughness, lane widths, and other physical characteristics. In addition to collecting these data, the states develop forecasts of traffic growth for each section. We reviewed the HERS model and reported in June 2000 that HERS provided the Congress with a more useful and realistic estimate of needed highway improvements than earlier models had. In particular, we found that a major strength of the model is its ability to assess the relative benefits and costs associated with making alternative highway improvements. In addition, an expert panel of economists and engineers from the public and private sectors convened by FHWA in June 1999 found that FHWA has strengthened HERS over time and that recent refinements have increased the model’s applicability and credibility. Nonetheless, we found that the HERS model also has some limitations. 
First, since the model analyzes each highway section independently rather than the entire transportation system as a whole, it cannot reflect how changes in one part of the system might affect another part of the system, such as how traffic might be redistributed as improvements are made. Second, the HERS model uses a computational “shortcut” to approximate the lifetime benefits associated with an improvement. Several transportation modeling experts have questioned whether this approach reasonably approximates future benefits. Third, because the HERS model is not designed to quantify the uncertainties associated with its methods, assumptions, and data, the model cannot estimate the full range of uncertainty within which its estimates vary. Finally, the model excludes certain classes of the nation’s highways from its analysis, meaning that FHWA must use alternate methods to forecast investment needs for these classes of highways. Like FHWA, state departments of transportation undertake planning and reporting activities to manage their highways and determine their capital needs. For example, under federal transportation planning requirements, states must carry out a process for considering the effect of transportation projects on a variety of factors, including the economy and the environment. States are also required to develop both long-range plans covering at least 20 years and transportation improvement programs (state investment plans) that cover at least 3 years. These requirements help ensure that state transportation projects come from a systematic planning process rather than from a “wish list” of transportation projects. To meet these planning and reporting requirements, some state DOTs have had to rely on their technical capabilities. Many states have developed pavement management systems to help them systematically analyze data on existing highways and project future pavement needs. For example, several states have used models, based on pavement engineering criteria, to analyze pavement needs either at the project level or for a whole statewide network. Some states have also adopted a predecessor of the HERS model developed by FHWA, called the Analytic Process model, that compares highway conditions with engineering criteria to identify potential improvements. After FHWA developed the HERS model, two states contracted to have customized state-level HERS models developed for them. Oregon DOT, when updating its long-range statewide highway plan, hired the same consultant that had produced HERS for FHWA. That consultant recommended that Oregon DOT use a customized version of the HERS model for its statewide plan. Similarly, when Indiana DOT engaged the same consultant for a corridor planning study, the consultant recommended that Indiana DOT use a customized version of the HERS model for its corridor planning analysis. Indiana DOT subsequently used its model’s results to draft a new statewide highway plan. After its positive experience with the national-level HERS model and the model’s successful adaptation in Oregon and Indiana, FHWA began to formally develop HERS-ST in 1999. FHWA expects that states will use the model in a variety of ways to facilitate planning for highway investment. Our review of the national-level HERS model showed that its results provide legislative and executive branch officials with useful information for decisions about highway investments. 
Legislative branch officials said they use the estimates to obtain general information on the nation’s need for infrastructure investments and find the HERS estimates more useful than previous estimates that were based on engineering analyses alone. FHWA views the national-level HERS model as a step forward in its efforts to meet the statutory requirement to report on the conditions and performance of the nation’s highways and future national highway investment requirements. FHWA officials also said that the HERS model’s benefit-cost approach complies with an executive order that requires federal spending for infrastructure to be based on a systematic analysis of expected benefits and costs. FHWA concluded that state transportation and other officials might find HERS-type analysis helpful in analyzing highway investments as well as supporting federal planning requirements. Facing increased funding constraints along with a greater demand for expenditure accountability, Oregon officials used a customized HERS model to prioritize needs and determine deficiencies in the state’s highway system. Oregon officials cite their HERS model’s effective use of benefit-cost analysis as a foundation for determining the best combination of improvements and for allocating resources between programs. Oregon officials have found these benefit-cost results useful for highway planning, corridor planning, and goal setting. For example, when analyzing the 1999 Oregon Highway Plan (an element of the required long-range plan), state officials evaluated investment tradeoffs between system preservation projects (capital projects that ensure that a highway continues to serve its intended purpose) and modernization projects (capital projects that typically increase capacity). Oregon’s report said that this analysis helped the Oregon Transportation Commission gain a clear picture of the condition of the highway system under different funding scenarios and thus helped the Commission make difficult investment decisions. (See app. II for information on the technical features of the Oregon model.) Indiana’s DOT sought out a modified version of the HERS model in an attempt to improve its planning process and, more specifically, to strengthen its technical planning tools. Indiana officials wanted a model that would analyze benefits and costs for all of the state’s highway projects, and they decided that a modified version of the HERS model would meet their needs. These officials used their HERS model to analyze highway investment needs over a 25-year period, including a comparison of the status of the highway system at different levels of funding. In addition, Indiana officials used their model to analyze highway investment needs at the district level within the state. The Indiana model has a unique feature that links specific model results with a geographic information system (GIS) that visually displays results on state highway maps. This feature allows the staff to compare district offices’ and metropolitan planning organizations’ priorities with the ones the model identifies. (See app. II for information on the technical features of the Indiana model.) After FHWA officials reviewed their positive experience with HERS, along with the positive experiences of Oregon and Indiana with their customized HERS models, they decided to consider developing a HERS model that all states could use.
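The benefit-cost logic that HERS-type models apply in these state analyses can be summarized as follows: candidate improvements are ranked by their benefit-cost ratio (lifetime benefits divided by capital cost), and under a constrained-funding scenario they are selected in that order, subject to the available budget, with only economically justified projects (ratio greater than 1) considered. The Python sketch below is a simplified illustration of that idea using hypothetical projects and dollar figures; it is not the HERS code itself, which evaluates many more factors over multiple funding periods.

```python
from dataclasses import dataclass

@dataclass
class Improvement:
    """A hypothetical candidate improvement on one highway section."""
    section_id: str
    description: str
    cost: float               # capital cost of construction, in dollars
    lifetime_benefits: float  # discounted user and agency benefits, in dollars

    @property
    def bc_ratio(self) -> float:
        return self.lifetime_benefits / self.cost

def select_improvements(candidates: list[Improvement], budget: float) -> list[Improvement]:
    """Greedy selection used here to illustrate a constrained-funding scenario:
    rank by benefit-cost ratio, keep economically justified projects (ratio > 1)
    until the budget runs out. This is a sketch, not the HERS algorithm."""
    selected = []
    remaining = budget
    for imp in sorted(candidates, key=lambda i: i.bc_ratio, reverse=True):
        if imp.bc_ratio > 1.0 and imp.cost <= remaining:
            selected.append(imp)
            remaining -= imp.cost
    return selected

# Hypothetical candidates for three highway sections.
candidates = [
    Improvement("A-101", "resurface", cost=2.0e6, lifetime_benefits=5.0e6),
    Improvement("B-202", "add lane", cost=9.0e6, lifetime_benefits=12.0e6),
    Improvement("C-303", "reconstruct", cost=6.0e6, lifetime_benefits=5.5e6),
]
for imp in select_improvements(candidates, budget=10.0e6):
    print(imp.section_id, imp.description, f"B/C = {imp.bc_ratio:.2f}")
```

Run with a $10 million budget, the sketch selects the resurfacing project because of its higher ratio, passes over the lane addition because it no longer fits within the remaining budget, and excludes the reconstruction project because its benefits do not exceed its cost, mirroring in simplified form the funding-level tradeoffs Oregon and Indiana officials described.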
FHWA’s Office of Asset Management commissioned two studies to identify the potential role of a HERS model in helping states assess their highway investment needs and develop state highway plans. The studies demonstrated a potential state interest in a state-level HERS model. Therefore, FHWA developed a prototype state model, HERS-ST, from the national-level HERS model that any state could use for planning and programming activities. FHWA officials believe that states could use the HERS-ST model to perform benefit-cost analysis on highway improvements and to forecast the future condition and performance of state highway systems. In addition, the Office of Asset Management’s Asset Management Primer explains that HERS-ST has the potential to help state-level policy makers address resource allocation questions because the model can analyze “what if” questions using specific funding levels. For example, the model can show the long-term effects that different levels of spending or different emphases in investment could have on the condition and performance of highways. The primer also states that the model may even help some states meet new Government Accounting Standards Board provisions requiring states to report the cost of maintaining their transportation infrastructure assets. The HERS-ST model that FHWA developed is based on and operates in much the same way as the national-level HERS model, with a few noteworthy differences. Like the HERS model, HERS-ST (1) projects the future condition and performance of a state’s highway system, (2) assesses whether any highway improvements are warranted, and (3) selects appropriate improvements using benefit-cost analysis. One difference between the HERS and HERS-ST models is that the HERS-ST model has an “override” feature that allows a state official to override highway improvement selections made by the model in order to reflect specific, local conditions. According to FHWA officials, the model’s override feature will enable state officials to apply specific knowledge about highway improvements (such as whether implementing a particular improvement is feasible) that may not be reflected in the model’s database. For example, an official might specify that the model reconstruct a highway section rather than resurface it because of problems with the underlying structure of the pavement that are not yet apparent from measurements of the pavement’s roughness. The override feature is unique to the HERS-ST model. Another difference is that the HERS-ST model is capable of providing detailed results about each of the highway sections it analyzes, including information on the particular improvement selected, the expected future condition of the section, and the benefits and costs of making the improvement. By contrast, the HERS model generates only summary results for the classes of roads it analyzes. In addition to these differences between the two models, the HERS-ST model offers states further options regarding what data to consider. State officials can adjust the HERS-ST model to reflect state conditions by, for example, using state highway construction costs rather than national average costs. And state officials may use HERS-ST either to analyze the statistical sample of their state’s highways included in FHWA’s Highway Performance Monitoring System (HPMS) database or, if they have the appropriate data, to analyze all highway sections in the state’s system. 
While the HERS model could also analyze all highway sections, it is currently limited to analyzing only the sample of sections in the HPMS database. When the HERS-ST model’s projections are based on sampled sections in the HPMS database, the projections may not account for all the highways for which a state department of transportation is responsible. However, the state can, if it has appropriate data on all its highway sections, use the HERS-ST model to analyze every section in the state highway system, as Oregon and Indiana did with their customized HERS models. (See app. II for a more detailed comparison of the national-level HERS and HERS-ST models.) FHWA distributed the prototype HERS-ST model software to 20 states volunteering to participate in its pilot project, which is intended to gauge interest in the model and to further identify potential uses for and revisions to it. Interest in the model was higher than FHWA expected. According to an FHWA official, the agency expected to have five states participate in the pilot. However, the number of interested states grew to 20, including Indiana and Oregon, before the pilot began. (See fig. 2.) Indiana and Oregon officials said they wanted to participate in the pilot program to learn about new features incorporated into the HERS-ST model and to share their customized HERS model experience with other pilot states. In December 2000, FHWA distributed the model, along with technical manuals and state-specific sample data on highway sections needed to run the model, to the 20 pilot project states. This distribution took place about 2 months before the pilot’s February 2001 kickoff workshop in New Orleans, Louisiana. The workshop was designed to train participating states in the use of the HERS-ST model. It included general information on the use of the model, information on Indiana’s and Oregon’s experiences with their customized HERS models, and technical review and training. FHWA officials plan to focus their efforts during the pilot program on providing technical support to participating states. FHWA officials also hope to provide training for state policymakers to explain how the HERS-ST results can be used. FHWA anticipated that the pilot project would conclude after approximately 2 months. However, the agency was prepared to extend the duration of the pilot if states indicated that additional time would be helpful. At the conclusion of the pilot, participants will be asked to report on (1) their experiences testing the model, (2) their assessment of the model’s usefulness in state planning and programming activities, and (3) their recommendations for further FHWA initiatives with respect to the model. FHWA expects to report by August 2001 on states’ comments and its own recommendations for further HERS-ST model initiatives. FHWA officials said that the agency will consider changes to the HERS-ST model at the end of the pilot project, depending on the number of states that identify particular changes as important. Officials from almost all of the eight states we randomly selected indicated that although they had limited knowledge about HERS-ST, they were looking forward to expanding their states’ technical tools to better support their planning processes. When asked why they planned to participate, the state officials said that, while they did not have details of how the model works, they did not want to miss out on any tool that might improve their planning and highway management.
In general, the state officials also expressed some level of dissatisfaction with their current planning tools. As one state official explained, her DOT was always looking to improve its planning process. (See app. III for the results of our discussions with state officials about the HERS-ST model.) A number of state officials indicated that the HERS-ST model’s benefit-cost analysis capability is an important feature that made the model attractive to them. In response to a question about why states wanted to participate in the pilot, officials from most of the states said that they hoped the model would help improve their knowledge about the economic impact of investment decisions. Officials from five of these states believe this would help the states prioritize projects and maximize the effect of their spending. An official in one state said that the state’s highway funding depends, in part, on a study of infrastructure needs. However, the state’s infrastructure study is based on the assumption that highway funding is unlimited. Thus, the official believes the results of the needs study are unrealistic. The official hopes HERS-ST can contribute economic reality to the state highway funding plan. When presented with a list of potential uses for the HERS-ST model results, state officials we interviewed said that, if the model provided realistic results, they would consider using the results in the following tasks: comparing benefits and costs of making alternative highway improvements; developing state highway plans, such as state transportation investment plans, long-range highway plans, local highway needs forecast assessments, and corridor studies; satisfying the requirements of the Government Accounting Standards Board’s provisions for reporting on the value of transportation infrastructure assets; and allocating funds to offices within the state highway agency (for example, by district). (See app. III for a more detailed list of potential uses.) For example, one state official indicated that his state plans to update its long-range highway plan shortly and hopes that HERS-ST may be useful for that work. Overall, officials indicated that the three most important uses for their states would probably be (1) performing benefit-cost analysis of alternative highway improvements, (2) developing or refining state transportation investment plans, and (3) assessing highway needs forecast by state district offices or local agencies. If states involved in the pilot project find that the HERS-ST model is useful, FHWA expects to upgrade it for future state users. First, FHWA plans to make certain changes to the HERS-ST model to keep it current with analytical improvements planned for the national-level HERS model. Second, FHWA is considering changes designed to make the model easier for states to use. Finally, states might also ask that FHWA enhance the HERS-ST model so it can analyze more detailed highway information. According to FHWA officials, if the pilot participants find the HERS concept attractive, FHWA will, as appropriate, provide for revising the HERS-ST model so that it will benefit from upgrades to the national-level version of the model. FHWA officials said their improvement plans for the national-level version of HERS include eliminating the computational shortcut that we identified as a limitation in our June 2000 report. This shortcut is designed to approximate the lifetime benefits associated with a highway improvement.
However, the approximation may not fully represent the lifetime benefits, and FHWA officials acknowledge that improvements in computing power have made it unnecessary. FHWA also plans to change the national-level HERS model by incorporating pavement performance data based on climate zones instead of assuming one rate of pavement deterioration, revising its highway-capacity analysis to reflect changes in the Transportation Research Board’s Highway Capacity Manual, revising the emissions data used as soon as the Environmental Protection Agency finishes revising its emissions model, and updating pavement improvement costs, currently based on 1988 data, to represent 1998 or 1999 data. As part of its evaluation of the pilot project, FHWA plans to ask state officials for suggestions for potential improvements to the model. Assuming the project continues past the pilot phase, FHWA officials say they will consider making those changes that will benefit multiple states. Our interviews with state, FHWA, and other officials indicate that states may ask FHWA to modify HERS-ST in ways that make the model easier to use without altering the model’s analytical structure. One state official expressed concern about the model’s user-friendliness, having heard that the HERS-ST program operates in an older DOS-based computer environment that department staff might not be familiar with. An FHWA consultant reviewing the HERS model concluded that updating the model so that it can operate in a more user-friendly, menu-driven environment might be the key to increasing the number of states that use the model. FHWA officials agreed that a menu-driven program would make the model easier for states to use. The HERS-ST model would also be easier for states to use if it accepted highway data in the same format that states use in their annual data submissions for FHWA’s HPMS database. The HERS-ST model requires input in the 1993 data reporting format, not the current HPMS format. To assist states participating in the pilot project, FHWA provided each one with its highway data already reformatted for use with the HERS-ST model. However, state officials wishing to analyze other highway sections in their states would have to reformat their data to the older format before the model could use it. An FHWA consultant, commenting on ways that the HERS model could be more useful to states, recommended that the model accept data corresponding to the latest format that FHWA requires for state HPMS data submissions. FHWA officials recognize that widespread use of the HERS-ST model would require addressing this situation. Our interviews with state and FHWA officials indicate that some states would like the HERS-ST model to analyze more detailed pavement management data. Many states have developed sophisticated pavement management systems that analyze more data than the pavement deterioration analysis done in the HERS or HERS-ST models. For example, a number of states already have pavement management systems that consider several types of pavement distress data. HERS-ST, like the HERS model, relies on data states report in the form of the International Roughness Index or the Present Serviceability Rating. Officials from four of the states we spoke with reported that they collect both roughness index data and serviceability rating data.
However, these officials noted that they do not use roughness index data for planning purposes, preferring to rely on their serviceability rating or the other data for highway system planning. Officials from half of the states we contacted said they only collect roughness index data at FHWA’s request and they base their internal planning analysis on pavement rating data in their pavement management systems. In addition, officials from two states said they were not satisfied with the quality of their states’ roughness index data and preferred to rely on their pavement rating data. FHWA officials also said they expect to address some of these concerns by incorporating more pavement distress data in the HERS model at some point in the future. However, they will not do so until such data are available to FHWA from all the states. FHWA officials said they are willing to support only one version of the HERS-ST model. But because states use various pavement distress measures, it is not clear to FHWA officials whether including these additional pavement data in the HERS-ST model would satisfy all states’ concerns. We provided a draft of this report to the Department of Transportation for review and comment. Officials from the Department generally agreed with the report. These officials also provided technical and clarifying comments, which we incorporated into the report as appropriate. We conducted our review from June 2000 through February 2001 in accordance with generally accepted government auditing standards. We will send copies of this report to cognizant congressional committees; the Honorable Norman Y. Mineta, Secretary of Transportation; and the Administrator, Federal Highway Administration. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Appendix IV lists key contacts and contributors to this report. To determine why the Department of Transportation’s (DOT) Federal Highway Administration (FHWA) developed a state-level version of the Highway Economic Requirements System (HERS) computer model and how FHWA expects that states will use the model, we first reviewed our work and resulting June 2000 report on the strengths, limitations, and uses of the national HERS model. We then interviewed FHWA officials about their state-level HERS model (HERS-ST). We also reviewed FHWA documents about the HERS-ST model and projects in FHWA’s Office of Asset Management. Finally, FHWA officials and HERS contractors told us that two states, Indiana and Oregon, were using state-level HERS models. We visited Indiana and Oregon to discuss the use of these models with officials in the Indiana and Oregon state departments of transportation and obtained and reviewed available model documentation and state products generated using their HERS models. To determine how FHWA is making the state-level HERS model available to states, we spoke with FHWA officials about their pilot-project plans. We also reviewed the pilot project workshop agenda and attended the workshop in February 2001. We reviewed HERS-ST model documents, including the draft Highway Economic Requirements System Technical Manual and the draft Highway Economic Requirements System Users Manual, and we talked with model developers to determine how the model was developed. Finally, we reviewed FHWA’s evaluation plan for the HERS-ST pilot project and the time frame for the project.
To determine how states expect to use the HERS-ST model and its results, we reviewed reports by FHWA consultants on the potential role of HERS in state-level investment decisions and talked with officials from a random selection of 8 of the 16 states that planned to participate in FHWA’s pilot project. The 16 states represent all states that FHWA reported were planning on participating in FHWA’s pilot program as of September 5, 2000, with the exception of Indiana and Oregon. We excluded Indiana and Oregon from this random sample because both states are already using customized state-level HERS models, and we were already planning to conduct site visits for these two states. Table 1 shows the 8 states we contacted, as well as the 16 states from which we chose the sample. To obtain consistent information from the eight states we contacted, we used a semi-structured interview format. See appendix III for a copy of the interview document with the results of our discussions with the eight states. As of December 11, 2000, the number of states that planned to participate in FHWA’s HERS-ST pilot had grown to 20. See figure 2 in the letter for a map of the 20 states. To identify potential changes that could be made to the model, we discussed this issue with a wide variety of groups, including FHWA officials, the consultant who developed the HERS-ST and the Indiana and Oregon HERS models, state officials using the Indiana and Oregon models, state officials planning on using the HERS-ST model, and others, such as academics, who have used the HERS model. We also reviewed information on pavement measurement data, including our previous work on pavement measures. This appendix describes technical aspects of the HERS computer model and the three related models designed for use by state highway planners. The HERS model simulates infrastructure improvement decisions for the highways it models by comparing the relative benefits and costs associated with alternative improvement options. In conducting its analysis, HERS uses an extensive set of data that are primarily collected and updated by the states and maintained by FHWA in the Highway Performance Monitoring System database. In addition, the HERS model performs its analysis using several submodels representing specific highway processes, including traffic growth, pavement wear, vehicle speed, accidents, and highway improvement costs. The analysis, which is based on the current condition of the highway system, is conducted over four 5-year periods, for a total of 20 years. The HERS model draws information from the database and analysis from the submodels to identify deficient sections, evaluate alternative improvement options, and select and implement improvements. HERS uses benefit-cost ratios (benefits divided by costs) to evaluate and select improvements under several investment scenarios that FHWA developed. The benefits include reductions in travel times, vehicle operating costs, and agency maintenance, while the costs include the capital expenditures necessary to construct the improvement. The model reports its results in a series of tables showing the cost of improvements needed to support the model’s investment decisions for each highway class and funding period analyzed. The HERS model has several strengths: The model’s major strength is its ability to assess the relative benefits and costs associated with alternative options for making improvements on the nation’s highways. 
The HERS model selects for implementation only those improvements that are economically justified according to its analysis, a significant improvement over FHWA’s previous methods, which used engineering standards to identify deficiencies and select improvements without regard to economic merit. Another strength of the HERS model is that FHWA has consulted with experts in order to assess the model’s reasonableness and improve it. For example, in June 1999, FHWA convened an expert panel consisting of economists and engineers from the public and private sectors. This panel found that FHWA has strengthened the model over time and that the recent refinements have increased its applicability and credibility. The HERS model has some limitations: First, because the HERS model analyzes each highway section independently rather than the entire transportation network, it cannot completely reflect changes occurring among all highways and modes in the transportation network at the same time. For example, it will not reflect how, as improvements are made, traffic might be redistributed from other existing highway sections to an improved highway section. By incorporating price elasticity into the model, FHWA officials assume that the model captures the net effect of all changes in the transportation network as well as in the overall economy. Although the implication of this limitation is unclear (it may over- or under-state the effect of changes in traffic resulting from a highway improvement), explicitly modeling the entire transportation network is not possible with the current state of the art in modeling or available data. Second, because the HERS model is not designed to quantify the uncertainty associated with its methods, assumptions, and data, the model cannot estimate the full range of uncertainty within which its estimates vary. As a result, the precision of the model’s estimates is unknown. The HERS model’s estimates rely on a variety of estimating techniques and hundreds of variables, all of which are subject to some uncertainty. However, changing the model to fully account for uncertainty in its factors is not likely to be cost-effective because it could require extensive and expensive reprogramming. We recommended in our June 2000 report that FHWA clarify, when publishing the results of HERS model analyses, that there is uncertainty associated with the results. State-level users can account for some uncertainty by conducting “sensitivity analyses” to measure how much the model estimates change when the values of certain key inputs or assumptions used in the model are changed. Third, the HERS model uses a computational “shortcut” to approximate the lifetime benefits associated with an improvement. Conceptually, benefits such as reductions in travel time accrue over each improvement’s full lifetime, 20 years or more. However, in its initial evaluation of whether to improve a highway section, the HERS model calculates benefits only during the first 5-year period. To account for the benefits accruing after the first 5-year period, FHWA developed a shortcut that essentially uses an estimate of the improvement’s construction cost as a proxy for the improvement’s remaining future benefits. FHWA developed the shortcut several years ago, when limitations in computer processing power necessitated simplifying some of the calculations. 
Given recent improvements in computing power, FHWA officials plan to modify the HERS model to account for lifetime benefits and see correcting the shortcut as a potential improvement for the HERS-ST model as well. Fourth, although FHWA has taken steps to ensure that the data used in the HERS model are reasonable, some of these data vary in quality. For example, the model uses emissions data that may not be representative of actual conditions. To estimate the emissions associated with traffic on a given section, the model uses information from the Environmental Protection Agency (EPA) on emissions rates per vehicle type and speed. Vehicle emissions, however, may depend more on how the vehicle is driven than on the total miles driven. FHWA officials told GAO they will update these data once EPA finishes revising its emissions data. In addition, we reported earlier that the pavement roughness data reported by the states to FHWA are not comparable, partly because the states use different devices and approaches for measuring roughness. The HERS model uses the roughness data in projecting the pavement condition of each section. FHWA is supporting efforts to standardize states’ pavement roughness measurements. Moreover, some information used in the model is dated. For example, the pavement resurfacing costs used in the HERS model are based on 1988 data (adjusted for inflation from 1988 to 1997). FHWA officials said they plan to update the HERS model’s resurfacing costs, and the HERS-ST model offers users the option of introducing their own construction cost data. The Oregon Department of Transportation obtained the first customized HERS model in 1998. Oregon hired a consulting firm, Cambridge Systematics, Inc., to help the state develop a long-range statewide highway plan. The consulting firm, which also developed the HERS model for FHWA, worked with Oregon officials to customize the HERS model, which resulted in the creation of HERS/OR. Oregon never received specific documentation for its model. But according to Oregon officials and the consultant, the model differs from the national-level HERS model in the following ways: HERS/OR allows the user to override the model’s improvement decisions for specific sections, for example, for a road that cannot feasibly be widened due to a nearby mountain. HERS/OR’s output includes two innovations: a section-by-section report providing details on individual improvements for each segment for each funding period and a revised summary table of improvements for the state’s four unique highway classifications. HERS/OR’s procedures for analyzing price elasticity are rudimentary when compared with the current HERS model, and data on vehicle accident costs are older. The Indiana Department of Transportation contracted in 1998 for a customized HERS model known as HERS/IN. HERS/IN is also similar to HERS, but has more unique features than HERS/OR: HERS/IN, like HERS/OR, analyzes all sections of its state highway system. HERS/IN uses its own data on construction costs, allowing the model to base its estimates of construction costs on more exact, local data. HERS/IN is capable of using pavement improvement decisions from the state’s sophisticated pavement management system. However, Indiana DOT staff had not used this feature by the time we conducted our work. Unlike the national-level HERS model, HERS/IN allows its user choices for overriding modeled improvement decisions. 
For example, the user can specify the type of improvement, its cost, its timing, and the improvement’s effect on highway capacity. Indiana DOT has not used this feature, according to officials. HERS/IN’s output includes the basic national-level HERS output tables, plus section-by-section improvement tables like those of HERS/OR and tables that summarize highway improvements’ benefits for users due to decreased travel time, decreased operating costs, and increased highway safety. In addition, HERS/IN’s output is used to generate maps to display the location of its improvement plans. The model is designed to feed its output data into a geographic information system software package that produces maps of the model’s proposed improvements. The Indiana Department of Transportation officials said that this feature improves their ability to display the location of the HERS/IN model’s decisions to policymakers. Furthermore, the maps help the state staff determine whether or not HERS/IN’s decisions are realistic. For example, if two major improvements are proposed for nearby sections of highway, the maps could alert the agency that, to avoid traffic problems in that area, the projects should not be performed simultaneously. HERS/IN is able to consider the construction of new highways that might be needed to provide capacity for future travel demand. The Department has a sophisticated travel-demand model that supports this HERS/IN feature. Found in no other version of the HERS model, this feature allows Indiana DOT to specify new highways and the effect of capacity improvements on traffic systemwide, as well as to compare alternative improvements for addressing a capacity problem. Indiana DOT has not used this feature, according to officials. Unlike the HERS model, the HERS/IN model is not used to assess the effect of highway travel on the environment. According to state officials, the HERS/IN model could take environmental data into account when making its decisions, but the officials did not feel this feature was feasible in their model. The HERS-ST model is the most recently designed HERS model. Generally, HERS-ST offers the analytic approaches available in the most recent HERS model revision. Because it is based on the HERS model, it has the same strengths and limitations that were noted above. However, the HERS-ST model differs from the national-level HERS model in the following ways: Unlike the HERS model, the HERS-ST model has an “override” feature that allows a user to override some or all of the improvement decisions made by the model. For example, the user can specify the particular type of improvement to be made on a highway section in any particular funding period. In the override mode, the model selects the user- specified improvements regardless of whether they are economically justified. According to FHWA officials, the model’s override feature will enable state users to apply specific knowledge about highway improvements (such as whether implementing a particular improvement is feasible) that may not be reflected in the model’s database. In addition to the override feature, the HERS-ST model differs from HERS in the number of highway classes it can analyze and the level of detail of the results it generates. For example, the HERS-ST model can analyze highway sections from all 12 of FHWA’s classes of roads, including rural minor collectors and urban and rural local roads. The HERS model is designed to analyze sampled sections from 9 of the 12 highway classes. 
• Also, the HERS-ST model is capable of providing the user with detailed results on the highway sections it analyzes, including information on the particular improvement selected, the expected future condition of the section, and the benefits and costs of making the improvement. FHWA officials stated that this feature would enable the state user to study what happens on individual sections. By contrast, the HERS model generates only aggregate results for classes of roads.
• Both the HERS-ST and the HERS models also use data from studies of the national economy. However, the state user can modify some of these data to reflect conditions in his or her state. For example, both models count as a benefit any reduction in travel time brought about by a highway improvement. In making this calculation, FHWA uses average national hourly compensation data from the Department of Labor’s Bureau of Labor Statistics to quantify the dollar value of travel time saved by travelers on work-related trips. In the HERS-ST model, the state user could substitute state-level data to derive an alternative estimate of travel time savings.
• The HERS-ST model also offers the state user a choice between analyzing a statistical sample of highways represented in FHWA’s HPMS database or the option of analyzing all highway sections in the state’s system.
While the HERS-ST model pilot project is FHWA’s first attempt to promote state use of a HERS model, the agency previously released copies of the HERS model. The model’s existence was well publicized because it had been described in DOT’s biennial Conditions and Performance reports starting with the 1995 edition, it was profiled in studies, and it was cited in TEA-21. By 1998, FHWA was providing HERS model documentation and computer files to parties who requested them. FHWA reported that 18 requesters, including state DOT officials, academics, and consultants, obtained copies of the model between April 1998 and September 2000. Michigan DOT officials who obtained copies of the model found that it did not suit their needs. They said that the HERS model was not useful to them because it would not handle all of the roads the department needed to study; available data would need reformatting to work with the model; and the results were aggregated at the network level, which was too general to be useful for the state’s purposes. On the other hand, a researcher at North Dakota State University’s Upper Great Plains Transportation Institute found the HERS model useful for state-level applications. He analyzed intermodal freight diversion (rail to truck or truck to rail) on behalf of two state transportation agencies. He also used HERS equations to analyze rural highway preservation for a third state transportation agency. To compare key differences between the HERS model and related state-level models, see table 2. In addition to those named above, Richard Calhoon, Catherine Colwell, Timothy J. Guinane, Luann Moy, Judy K. Pagano, and Raymond Sendejas made key contributions to this report.
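To make the travel-time valuation and benefit-cost logic described above concrete, the following is a minimal sketch, not FHWA's HERS code. The hours saved, hourly compensation, work-trip share, and inflation adjustment are assumed values, and HERS also counts benefits (such as reduced operating costs and improved safety) that are omitted here. A state user substituting its own compensation or construction cost data would simply change the corresponding inputs.

```python
# A minimal sketch of the travel-time valuation and benefit-cost test described
# above. It is not FHWA's HERS code; all input values are hypothetical, and HERS
# also values changes in operating costs, safety, and emissions, which are
# omitted here.

def travel_time_benefit(vehicle_hours_saved_per_year: float,
                        avg_hourly_compensation: float,
                        share_on_work_trips: float) -> float:
    """Annual dollar value of travel time saved on work-related trips."""
    return vehicle_hours_saved_per_year * share_on_work_trips * avg_hourly_compensation

def inflation_adjusted_cost(base_year_cost: float, price_index_ratio: float) -> float:
    """Scale a base-year cost (for example, a 1988 resurfacing cost) to later-year dollars."""
    return base_year_cost * price_index_ratio

if __name__ == "__main__":
    # Hypothetical section on which an improvement saves 12,000 vehicle-hours a year.
    benefit = travel_time_benefit(
        vehicle_hours_saved_per_year=12_000,
        avg_hourly_compensation=22.50,  # a state user could substitute state-level data
        share_on_work_trips=0.4,
    )
    annualized_cost = inflation_adjusted_cost(base_year_cost=75_000, price_index_ratio=1.35)
    print(f"Annual travel-time benefit: ${benefit:,.0f}")
    print(f"Inflation-adjusted annualized cost: ${annualized_cost:,.0f}")
    print(f"Benefit-cost ratio: {benefit / annualized_cost:.2f}")
```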
The Federal Highway Administration (FHWA) developed the state-level version of the Highway Economic Requirements System (HERS-ST) model as an investment-analysis tool for highway planning at the state level. FHWA officials believe that some state departments of transportation will find the analysis that the HERS-ST model produces useful because it demonstrates the potential results of highway investment decisions from an economic point of view. FHWA is conducting a pilot project for its prototype HERS-ST model with states that volunteered to test the model. FHWA distributed to these states HERS-ST software, technical manuals, and sets of state highway data with which to run the model. FHWA then provided an overall orientation and technical training and addressed states' questions during a workshop. Officials from a sample of the states planning to participate reported that they are primarily interested in taking advantage of the model's use of benefit-cost analysis to assess alternative highway improvements. If the pilot project shows that states view the HERS-ST model as a useful tool, FHWA expects to upgrade the model for future users. In doing so, it would consider both enhancements that have already been planned for the national-level HERS model and changes targeted specifically to HERS-ST. Changes to improve the HERS-ST model's usefulness to states include converting the model to a menu-driven system to improve its ease of use or revising the model's data input format so that it matches FHWA's current state highway data reporting requirements.
The Army’s ground-based military operations generally use two kinds of vehicles: combat vehicles designed for a specific fighting function and tactical vehicles designed primarily for multipurpose support functions. Most combat vehicles move on tracks—including the Abrams tank and the Bradley Fighting Vehicle—but some move on wheels, such as the Stryker. Tactical vehicles generally move on wheels, including the HMMWV and the JLTV. Most major defense acquisitions follow a structured acquisition process, which normally consists of three discrete phases: (1) technology development; (2) engineering and manufacturing development; and (3) production and deployment. Programs are expected to meet certain criteria at milestone decision points for entry into each phase. For anticipated major defense acquisition programs, like the GCV and the JLTV, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD/ATL) generally serves as the Milestone Decision Authority. The Milestone Decision Authority is responsible for approving the programs’ entry into the defense acquisition system, approving entry into subsequent phases, and documenting the various approvals through acquisition decision memorandums. The Army’s GCV program is intended to modernize the current ground combat vehicle fleet, replacing a portion of the Bradley Infantry Fighting Vehicles currently in inventory. In February 2010, the Army issued a request for proposals for the technology development phase of the GCV before completing the required analysis of alternatives (AOA), citing schedule urgency. In May 2010, the Army convened a “Red Team” to assess the risk of achieving the GCV schedule. The Red Team issued its report in August 2010, citing major risk areas including schedule, technical maturity, and affordability of the system. The Army rescinded the original request for proposals and issued another in late 2010. The milestone A decision was expected in April 2011, but did not occur until August 2011 (see fig. 1). In August, the Army awarded technology development contracts to two contractor teams. A third contractor team submitted a proposal but did not receive a contract award and has filed a bid protest with GAO that is still being considered. The Army has been defining a strategy to develop, demonstrate, and field a common tactical information network across its forces. Generally, such a network is expected to act as an information superhighway to collect, process, and deliver vast amounts of information such as images and communications while seamlessly linking people and systems. The Army’s current strategy is to better understand current Army networking capabilities, determine capabilities needed, and chart an incremental path forward. The Army plans regular demonstrations as the network grows and its capability improves. The Army and Marine Corps generally define light tactical vehicles as capable of being transported by a rotary wing aircraft and with a cargo capacity of equal to or less than 5,100 pounds. Light tactical vehicles represent about 50 percent of the Army’s tactical wheeled vehicle fleet and currently consist of the HMMWV family of vehicles. The Army’s HMMWV program also provides vehicles to satisfy Marine Corps, Air Force, and other requirements. The JLTV is expected to be the next generation of light tactical vehicles and is being designed to provide the advances in protection, performance, and payload to fill the capability gap remaining between the HMMWV and MRAP family of vehicles. 
JLTV is being designed to protect its occupants from the effects of mines and improvised explosive devices without sacrificing its payload capability or its automotive performance, which has not been the case with the other tactical wheeled vehicles. The Army’s recent history with its acquisition programs was the subject of a review by a panel chartered by the Secretary of the Army. In its January 2011 report, the panel noted that the Army has increasingly failed to take new development programs into full-rate production. From 1990 to 2010, the Army terminated 22 major defense acquisition programs before completion. While noting many different causes that contribute to a program’s terminations, the panel found that many terminated programs shared several of the same problems, including weak trade studies or analyses of alternatives; unconstrained weapon system requirements; underestimation of risk, particularly technology readiness levels; affordability reprioritization; schedule delays; and requirements and technology creep. The panel made a number of recommendations to help make the Army’s requirements, resourcing, and acquisition processes more effective and efficient. Over the next 2 years during the technology development phase, the Army faces major challenges to identify a feasible, cost-effective, and executable solution that meets the Army’s needs. Among these are making choices on which capabilities to pursue and include in a GCV vehicle design and determining whether the best option is a new vehicle or a modified current vehicle. In our March 2011 testimony, we identified key questions about GCV pertaining to how urgently it is needed, robustness of the analysis of alternatives, plausibility of its 7-year schedule, cost and affordability, and whether mature technologies would be used. Since that time, the Army has moved the GCV program into the technology development phase. DOD and the Army have taken positive steps to increase their oversight of the program; however, the timely resolution of issues surrounding the areas previously identified will be a major challenge.
• Urgency of need: The Army’s recent combat vehicle capability portfolio review confirmed the Army’s need for GCV as a Bradley Infantry Fighting Vehicle replacement and USD/ATL approved the GCV acquisition program. USD/ATL agreed that the Army has a priority need for a GCV but the number of caveats in the approval decision (as discussed below) raises questions about the soundness of the Army’s acquisition plans and time lines.
• Analysis of alternatives: After initially bypassing completion of the AOA process, the Army subsequently conducted an AOA but was directed by USD/ATL to conduct more robust analyses, throughout the technology development phase, to include design and capability trades intended to reduce technical risks and GCV production costs. We have reported that a robust AOA can be a key element in ensuring a program has a sound, executable business case prior to program initiation and that programs that conduct a limited AOA tended to experience poorer outcomes—including cost growth. The Army is expected to include sensitivity analyses in the AOA to explore trade-offs between specific capabilities and costs. These analyses will be supported by assessments of existing combat vehicles to determine whether they are adequate alternatives to a new vehicle, or whether some of the designs or capabilities of existing vehicles should be incorporated into a new GCV.
Concurrently, the GCV contractor teams will conduct design trades and demonstrate technologies, the results of which will also be fed back into the AOA updates.
• Plausibility of 7-year schedule: The Army’s plan to deliver the first production vehicles in 7 years still has significant risk. Since GCV was originally conceived in 2009, the Army has already reduced some requirements and encouraged interested contractors to use mature technologies in their proposals. However, the schedule remains ambitious and USD/ATL has stipulated that the Army will need to demonstrate that the schedule is both feasible and executable. According to an independent Army program evaluator, the next 2 years of technology development will require many capability and requirements trades in order to better define an acceptable solution at the same time that technology risks for that solution are to be identified and mitigated. Concurrent activities can lead to poor results, calling into question whether the 7-year schedule is executable. The independent cost estimate submitted for the milestone A review featured higher GCV development costs with the assumption that the Army would need 9 or 10 years to complete the program, instead of the assumed 7 years.
• Cost and affordability: Cost continues to be a challenge, as an independent cost estimate was at least 30 percent higher than the Army’s estimate for GCV procurement. USD/ATL has directed that continued program approval depends on the Army’s ability to meet the $13 million procurement unit cost target. As for affordability, with the expectation that less funding will be available in coming years, the Army has made some trades within the combat vehicle portfolio. According to Army officials, the Army plans to proceed with GCV as currently planned, but several other combat vehicle programs—such as anticipated upgrades for the Bradley, Abrams, and Stryker vehicles—are being reshaped or delayed.
• Use of mature technologies: The Army encouraged interested contractor teams to use mature technologies in their GCV proposals. Due to the current bid protest, we do not have insight into what the contractor teams proposed in terms of specific critical technologies or their maturity. A DOD official stated, and we agree, that it will be important that technologies be thoroughly evaluated at the preliminary design review before the decision to proceed to the engineering and manufacturing development phase.
The Army has taken a number of steps to put together a more realistic strategy to develop and field an information network for its deployed forces than the network envisioned for the Future Combat System program. However, the Army is proceeding without defining requirements for the network and articulating clearly defined capabilities. As a result, the Army runs the risk of developing a number of stovepipe capabilities that may not work together as a network, thus wasting resources. The Army has moved away from its plan for a single network development program under Future Combat System to an incremental approach with which feasible technologies can be developed, tested, and fielded. This planned approach reflects lessons learned and changes the way the Army develops, acquires, and fields network capabilities. Under this new approach, numerous programs will be developed separately and coordinated centrally, and network increments will be integrated and demonstrated in advance of fielding rather than the previous practice of ad hoc development and integration in the field.
A key aspect of the implementation of the new approach will be aligning the schedules of the separate programs with the Army’s planned, semiannual field events, called network integration evaluations, where emerging technologies are put in soldiers’ hands for demonstration and evaluation. Several key aspects of the Army’s network strategy include:
• In our March 2011 testimony, we pointed out that roles and responsibilities for network development were not clear. Since then, senior Army leadership issued a directive detailing the collective roles, responsibilities, and functions of relevant Army organizations involved with the network modernization effort.
• The Army is currently working to establish a comprehensive integrated technical baseline for the network and addressing prioritized capability gaps. With this baseline, the Army expects to build on elements of the network already in place with an emphasis on capturing emerging technologies that deliver capability incrementally to multiple units at the same time. This represents a significant departure from the previous practice of fielding systems individually and often to only one element of the operational force at a time (for instance, companies, battalions, or brigades).
• The network integration evaluations are a key enabler of the Army’s new network strategy and assess systems that may provide potential benefits and value to the Army while identifying areas requiring additional development. The evaluation process provides the Army an opportunity to improve its knowledge of current and potential network capability. Additionally, it provides soldier feedback on the equipment being tested. For example, members of the Army’s network test unit, the Brigade Modernization Command, indicated that a number of systems tested should be fielded and that other systems should continue development.
Several issues will need to be resolved as the Army implements its network strategy. For example:
• The Army has not yet announced requirements nor has it established cost and schedule projections for development and fielding of its network. Since the Future Combat System termination, the Army does not have a blueprint or framework to determine how the various capabilities it already has will fit together with capabilities it is acquiring to meet the needs of the soldier. Even with an incremental approach, it is important for the Army to clearly articulate the capabilities the system is attempting to deliver. Without this knowledge, the Army runs the risk of acquiring technologies that may work in a stand-alone mode but do not add utility to the broader network strategy.
• The network integration evaluation provided an extensive amount of data and knowledge on the current Army network and candidate systems for the network. However, since the network integration evaluation serves as an evaluation instrument, it is important to have test protocols that capture objective measures and data on the network’s performance. Two independent Army test oversight agencies, reflecting on the evaluation results, expressed concern over not having proper instrumentation for the overall evaluations; in particular, not having the necessary instrumentation to conduct operational tests on large integrated networks and not having clear network requirements.
• Army officials are developing a strategy to identify, demonstrate, and field emerging technologies in an expedited fashion.
To date, the Army has developed an approach to solicit ideas from industry and demonstrate the proposed technologies in the network integration evaluation. However, the Army is still formulating its proposed approach for funding and rapidly procuring the more promising technologies.
• Development of the Joint Tactical Radio System ground mobile radio, a software-defined radio that was expected to be a key component of the network, has recently been terminated. In a letter to a congressional defense committee explaining the termination, the acting USD/ATL stated that the termination was based on growth in unit procurement costs. He added that it is unlikely that the Joint Tactical Radio System ground mobile radio would affordably meet requirements and may not meet some requirements at all. The radio performed poorly during the network integration evaluation and was given a “stop development and do not field” assessment by the test unit. Based on the assessment that a competitive market had emerged with the potential to deliver alternate radios to meet the capability at a reduced cost, the acting USD/ATL also established a new program for an affordable; low-cost; reduced size, weight, and power radio product. At this point, it is not yet clear when and how that program will proceed or how these new radios will be able to fit within the Army’s network strategy.
• The Army plans for the future tactical network to feature the use of the wideband networking and soldier radio waveforms and, in our March 2011 testimony, we reported that the Army has had trouble maturing these waveforms for several years and they are still not at acceptable levels of maturity. Although both waveforms experienced limited successes during the recent network integration evaluation testing, Army officials indicate that the wideband networking waveform continues to be very complex and not fully understood, and there may be substantial risk in maturing it to its full capability requirement. With the termination of the ground mobile radio, it is unclear how waveform maturation will continue.
• Although the network integration kit—expected to be a fundamental part of the Army’s information network—was found to have marginal performance, poor reliability, and limited utility, the USD/ATL approved procurement of one additional brigade set of network integration kits. The decision made potential fielding of the kits—radios, waveforms, integrated computer system, and software—contingent on user testing that successfully demonstrates that it can improve current force capabilities. The network integration kit again performed poorly during the recent network integration evaluation and received a “stop development and do not field” assessment. Army network officials have indicated that a senior Army leadership memorandum will be forthcoming that will cancel further network integration kit development and fielding. Earlier, the Army concluded that the network integration kit was not a long-term, viable, and affordable solution.
To reduce risk in the JLTV program, the Army and Marine Corps entered a technology development phase with multiple vendors to help increase their knowledge of the needed technologies, determine the technologies’ maturity level, and determine which combination of requirements was achievable. The contractors delivered prototype vehicles in May 2010 and testing to evaluate the technical risks in meeting the proposed requirements, among other things, was completed on the vehicles in June 2011.
Because of the knowledge gained through the technology development phase, the services have worked together to identify trades in requirements to reduce weight and to drive down the cost of the vehicle. A different outcome may have resulted if the services had proceeded directly to the engineering and manufacturing development phase, as had been considered earlier. Based on the technology development results, the services concluded that the original JLTV requirements were not achievable and its cost would be too high. For example, the services found that JLTV could not achieve both protection levels and transportability, with weight being the issue. As a result, the services have adjusted the JLTV transportability requirement to a more achievable level and the Army and Marine Corps have decided that they would rely on HMMWVs for other missions initially intended for JLTV. In fact, the Army has chosen to proceed with even higher protection levels than planned earlier for JLTV. The Army now plans to have protection levels equal to the M-ATV, including underbody protection, while the Marine Corps will continue with the original protection level, similar to the MRAP family of vehicles except for the underbody protection, but plans to conduct more off-road operations to avoid mines and roadside bombs. As for armor protection, the services have found that development of lightweight, yet robust armor has not proceeded as rapidly as hoped and production costs for these new technologies are significantly higher than for traditional armor. The services have established an average procurement cost target of $350,000. A key component of the average procurement cost is the average manufacturing unit cost, which includes the cost of labor, materials, and overhead to produce and assemble the product. Achieving the average procurement cost target of $350,000 would require an average manufacturing unit cost of $250,000 to $275,000. While one recent technology development projection of a fully armored JLTV average procurement cost exceeded $600,000, the program office now estimates that, by implementing requirements trades and the cost savings from those trades, industry can meet the average manufacturing unit cost and average procurement cost targets. Nevertheless, meeting the JLTV cost targets will be a challenge and will also likely depend on what type of contract the services award. The services’ current JLTV plan is to award a multiyear procurement contract with sizable annual quantities, once a stable design is achieved. Originally, the services planned to follow a traditional acquisition approach for JLTV and enter the engineering and manufacturing development phase in January 2012. According to the Army program manager for light tactical vehicles, the services now plan to use a modified MRAP acquisition model in which industry would be asked to build a set of vehicles that would subsequently be extensively tested prior to a production decision. The Army has stated that industry had demonstrated several competitive prototypes whose performance and cost have been verified and believes that industry can respond with testable prototypes within about 1 year. Many details of the new strategy have yet to be worked out but a milestone B review is anticipated in April 2012. While this approach is seen as saving time and money, it will forgo the detailed design maturation and development testing process typically done early in the engineering and manufacturing development phase.
A key risk is the potential for discovering late that the vehicles are still not mature. Both the Army and the Marine Corps have articulated a significant role for the Up-Armored HMMWV in combat, combat support, and combat service support roles beyond fiscal year 2025, but their fleets are experiencing reduced automotive performance, loss of transportability, higher operation and sustainment costs, and the need for better protection as the threats have evolved. The Army plans to recapitalize a portion of its Up-Armored HMMWV fleets by establishing requirements, seeking solutions from industry through full and open competition, and testing multiple prototype vehicles before awarding a single production contract. The Army’s emerging effort—the Modernized Expanded Capacity Vehicle program—aims to modernize vehicles to increase automotive performance, regain mobility, extend service life by 15 years, and improve blast protection. The initial increment of recapitalized vehicles for the Army is expected to be about 5,700, but depending on the availability of funds, the quantity for the Army could increase. The Army plans a two-phased acquisition strategy for recapitalizing the Up-Armored HMMWV that includes awarding contracts to up to three vendors for prototype vehicles for testing and a production contract to a single vendor. The production decision is scheduled for late fiscal year 2013. The Army is anticipating a manufacturing cost of $180,000 per vehicle, not including armor, based on the cost performance of similar work on other tactical platforms managed by the Army. According to the Marine Corps developers, the Marine Corps has concluded a recapitalized HMMWV will not meet requirements for its fleet of 5,000 light combat vehicles. However, it will conduct research to find the most effective way to sustain the balance of the fleet—about 14,000 vehicles—until 2030. The Marine Corps plans to leverage components and subsystems from the Army-sponsored HMMWV recapitalization program. Detailed information on this effort is not currently available. Marine Corps and Army officials have said they intend to cooperate on the recapitalization effort and are sharing information on their individual plans to help maximize value for the available funding. As the services proceed to implement their new JLTV and HMMWV strategies, they have identified a point in fiscal year 2015 (see fig. 2) where a decision will be made on whether to pursue JLTV only or both programs. By then, the technology and cost risks of both efforts should be better understood. The Army continues to struggle to define and implement a variety of modernization initiatives since the Future Combat System program was terminated in 2009. The most recent example of this is the termination of the ground mobile radio, which will require the Army to develop new plans for relaying information to the soldier. The pending reductions in the defense budgets are having a significant impact on Army acquisition programs and the Army is already reprioritizing its combat vehicle investments. As plans for GCV move forward, it will be important for DOD, the Army, and the Congress to focus attention on what GCV will deliver and at what cost and how that compares to other needs within the combat vehicle portfolio. Beyond combat vehicles, DOD and the services will also be facing some tough decisions in the future on the tactical wheeled vehicle programs and the affordability of both the JLTV and the HMMWV recapitalization effort.
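Referring back to the JLTV cost targets discussed above, the short sketch below shows only the arithmetic relationship between the $350,000 average procurement cost target and the implied $250,000 to $275,000 average manufacturing unit cost; the share left for non-manufacturing costs is shown for illustration and is not the program office's estimate.

```python
# Illustrative arithmetic only, using the JLTV cost targets cited above. The split
# between manufacturing and other procurement costs shown here is an assumption
# for illustration, not the program office's cost model.

AVERAGE_PROCUREMENT_COST_TARGET = 350_000        # per-vehicle target cited above
MANUFACTURING_UNIT_COST_RANGE = (250_000, 275_000)

for manufacturing_cost in MANUFACTURING_UNIT_COST_RANGE:
    other_costs = AVERAGE_PROCUREMENT_COST_TARGET - manufacturing_cost
    share = other_costs / AVERAGE_PROCUREMENT_COST_TARGET
    print(f"Manufacturing unit cost ${manufacturing_cost:,}: "
          f"${other_costs:,} (about {share:.0%}) remains for non-manufacturing costs")
```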
Over the last few years, the Army has been conducting capability portfolio reviews which have proven to be very helpful in identifying overlaps and setting priorities. The reviews were highlighted in the Army Acquisition Review and have been important in getting the Army to think more broadly and to look beyond the individual program. On both JLTV and GCV, as the requirements have been examined more closely, the services are finding that they can make do with less in terms of capabilities than originally anticipated and projected unit costs have been reduced significantly. It is important that the Army continue to use and improve on its capability portfolio review processes going forward and to consider a broad range of alternatives. Chairman Bartlett, Ranking Member Reyes, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For future questions about this statement, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include William R. Graveline, Assistant Director; William C. Allbritton; Morgan DelaneyRamaker; Marcus C. Ferguson; Dayna Foster; Danny Owens; Sylvia Schatz; Robert S. Swierczek; Alyssa B. Weir; and Paul Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the Army canceled the Future Combat System in June of 2009, it began developing modernization plans, including developing a new Ground Combat Vehicle (GCV) and additional network capability. At the same time, the Army was considering options on how to improve its light tactical vehicles. This statement addresses potential issues related to developing (1) the new GCV, (2) a common information network, and (3) the Joint Light Tactical Vehicle (JLTV) in a constrained budget environment. The statement is based largely on previous GAO work conducted over the last year in response to congressional requests and results of other reviews of Army modernization. To conduct this work, GAO analyzed program documentation, strategies, and test results; interviewed independent experts and Army and Department of Defense (DOD) officials; and witnessed demonstrations of current and emerging network technologies. DOD reviewed the facts contained in this statement and provided technical comments, which were incorporated as appropriate. Delivering a feasible, cost-effective, and executable GCV solution presents a major challenge to the Army, with key questions about the robustness of the analysis of alternatives, the plausibility of its 7-year schedule, and cost and affordability. DOD and the Army have taken steps to increase oversight of the program, but resolving these issues during technology development will remain a challenge. For example, the Army has already reduced some requirements and encouraged contractors to use mature technologies in their proposals, but the 7-year schedule remains ambitious, and delays would increase development costs. Independent cost estimates have suggested that 9 to 10 years is a more realistic schedule. Over the next 2 years during the technology development phase, the Army faces major challenges in deciding which capabilities to pursue and include in a GCV vehicle design and determining whether the best option is a new vehicle or modifications to a current vehicle. The Army's new information network strategy moves away from a single network development program to an incremental approach with which feasible technologies can be developed, tested, and fielded. The new strategy has noteworthy aspects, such as using periodic field evaluations to assess systems that may provide potential benefit and getting soldier feedback on the equipment being tested. However, the Army has not articulated requirements, incremental objectives, or cost and schedule projections for its new network. It is important that the Army proceed in defining requirements and expected capabilities for the network to avoid the risk of developing individual capabilities that may not work together as a network. With the cancellation last week of its ground mobile radio and continuing problems in developing technology to provide advanced networking capability, the Army will still need to find foundational pieces for its network. The Army is reworking earlier plans to develop and acquire the JLTV and is planning to recapitalize some of its High Mobility Multipurpose Wheeled Vehicles (HMMWV). These efforts have just begun, however, and their results are not yet assured. To reduce risk in the JLTV program, the services relied on multiple vendors during technology development to increase their knowledge of the needed technologies, determine the technology maturity level, and determine which requirements were achievable.
As a result, the services identified trades in requirements to drive down the cost of the vehicle. For example, the services found that JLTV could not achieve both protection level and transportability goals, so the services are accepting a heavier vehicle. A potential risk for the services in allowing industry to build vehicles for testing is that the prototypes may not be mature; the Army will need to keep its options open to changes that may result from these tests. Both the Army and the Marine Corps have articulated a significant future role for their Up-Armored HMMWV fleets, yet the fleets are experiencing reduced automotive performance, the need for better protection as threats have evolved, and other issues. The Army is planning to recapitalize a portion of its Up-Armored HMMWV fleet to increase automotive performance and improve blast protection. The Marine Corps' plans to extend the service life of some of its HMMWVs used in light tactical missions are not yet known. GAO is not making any recommendations with this statement; however, consistent with previous work, this statement underscores the importance of developing sound requirements and focusing up front on what modernization efforts will deliver and at what cost.
During fiscal years 1993-98, the United States funded rule of law programs and related activities in countries throughout the world. Over this period, rule of law assistance totaled at least $970 million. Figure 1 illustrates the worldwide U.S. rule of law funding for fiscal years 1993-98. Over the period, the total annual rule of law funding increased from $128 million to $218 million. Although funding appears to have declined substantially in 1996, this may be largely explained by the fact that USAID could not readily provide rule of law funding information for fiscal year 1996 due to problems with its automated information system. On a regional basis, the Latin America and the Caribbean region received the largest share, with about 36 percent. Africa, Central Europe, and the newly independent states of the former Soviet Union received about 15 percent each. (See table 1.) From fiscal year 1993 to 1998, rule of law funding shifted primarily from the Latin America and the Caribbean region to other regions, mainly Central Europe. Funding for Central Europe grew from about $9 million in fiscal year 1993 to over $67 million in fiscal year 1998, accounting for 31 percent of the worldwide rule of law assistance that year. Over the same period, rule of law assistance in Latin America and the Caribbean declined from about $57 million (44 percent of the worldwide total) to $42 million (19 percent). Rule of law assistance to Africa also declined from $38 million (30 percent of the worldwide total) in 1993 to $29 million (13 percent) in 1998. Figure 2 illustrates these trends; appendix I provides more detailed data. During fiscal years 1993-98, we identified 184 countries that received at least some U.S. rule of law funding. However, over half of this assistance went to just 15 countries. Haiti received the most, primarily in connection with U.S. and international efforts to restore peace and stability to the country after a 1991 coup. Most countries (102 of the 184) received less than $1 million. Table 2 illustrates the top 15 recipients. (App. II provides detailed rule of law funding by region and country for fiscal years 1993-98.) State’s Under Secretary for Global Affairs has overall responsibility for coordinating rule of law programs and activities. At least 35 entities from the departments and agencies have a role in providing U.S. rule of law assistance programs. (See app. III.) Most U.S. rule of law funding is provided through the international affairs appropriations and is transferred or reimbursed to the other departments and agencies, primarily by USAID, but to a lesser extent by State. USAID and the Department of Justice oversaw the implementation of 70 percent, or about $683 million, of all U.S. rule of law assistance programs and activities worldwide during fiscal years 1993-98. USAID focused on improving the capabilities of judges, prosecutors, and public defenders and their respective institutions as well as increasing citizen access to justice. Most of Justice’s rule of law activities were carried out by its International Criminal Investigative Training Assistance Program (ICITAP), which emphasized enhancing the overall police and investigative capabilities of law enforcement organizations. State, the Department of Defense, and USIA accounted for about $258 million, or about 27 percent, of the U.S. worldwide efforts. State’s activities focused on international narcotics and law enforcement and antiterrorist assistance. 
Defense provided rule of law training to foreign military servicemembers, but most of its rule of law assistance was provided to support its operations in Haiti. USIA focused on increasing the awareness and knowledge of rule of law issues through various educational programs, such as exchanges between host country judicial and law enforcement personnel and their U.S. counterparts. (See fig. 3.) Funding for rule of law programs and related activities was provided primarily through the international affairs appropriations for USAID, State, and USIA. These three entities accounted for more than 91 percent of all rule of law funding, or $884 million, in fiscal years 1993-98. In addition, Defense provided about $58 million (6 percent). Although they provided small amounts of funding, almost all rule of law assistance provided by Justice, the Treasury, and other departments and agencies was funded through interagency transfers and reimbursements from USAID and, to a lesser extent, State. As previously noted, the Latin America and the Caribbean region was the largest recipient of U.S. rule of law assistance in fiscal years 1993-98. As with the overall worldwide rule of law assistance, we identified the funding and recipients and the departments and agencies involved. In addition, we categorized the rule of law assistance provided to the region to help describe what the overall purposes of the assistance were. In fiscal years 1993-98, the United States provided $349 million in rule of law assistance to Latin America and the Caribbean (about 36 percent of the worldwide total). Forty countries in the region received a portion of this assistance, although the funding was concentrated among a few countries. Seven countries accounted for about 76 percent of the total regional funding. Two of the seven—Haiti and El Salvador—accounted for just over 50 percent of the regional total, with $137.9 million and $40.7 million, respectively. (See fig. 4.) Haiti was a special case. The United States provided large amounts of assistance during this period in an attempt to restore order and democracy after a coup in 1991. Nearly one-third of the assistance for Haiti was a $42.6 million, one-time commitment from Defense in 1994 for equipment, supplies, and other support to assist international police monitors and a multinational force. In subsequent years, Haiti continued to be the top recipient of rule of law funds in the region, receiving $35.5 million in fiscal year 1995, $16 million in 1996, and about $15 million in fiscal years 1997 and 1998. Most of this assistance was provided to develop and support a civilian national police force. To help illustrate what rule of law assistance was used for in the Latin America and the Caribbean region, we grouped rule of law assistance into one of six categories based on descriptions provided by the cognizant agencies. Although we placed each program or activity into one primary category, many programs, USAID’s in particular, had multiple purposes that could be identified with more than one category. Figure 5 illustrates the distribution of rule of law assistance by these categories. (App. IV defines the categories we used and provides funding levels by country and category.) The largest rule of law category was assistance for criminal justice and law enforcement. About $199 million—57 percent of the regional total for fiscal years 1993-98—was dedicated to these activities. 
We included assistance to police, prosecutors, public defenders, and other host country agencies (such as customs) that take on law enforcement functions, as well as antinarcotics and antiterrorism assistance, in this category. Almost every country in the region that received rule of law assistance had some criminal justice and law enforcement funding. Haiti received the largest amount of such assistance—$72.5 million. Other major recipients were El Salvador ($25.9 million), Colombia ($19.9 million), Panama ($11.2 million), and Bolivia ($9.8 million). Virtually all of the assistance provided through Justice and most of the funding provided by USAID and State were in this category. Assistance for judicial and court operations was the second largest category, comprising $74.2 million (21 percent of the regional total). USAID provided 88 percent of the funding. Assistance for civil government and military reform was the third largest category—$47.6 million (13.6 percent). We included assistance for governmental entities other than the courts and criminal justice and law enforcement systems in this category. The largest single element was $42.6 million provided by Defense to Haiti in 1994. In addition, we included most of the military service training on topics such as civil-military relations and professional skills for maritime and military personnel. Much less funding was devoted to the other categories—$22 million for democracy and human rights, $5.3 million for general and other activities, and $1.1 million for law reform. In the category of democracy and human rights, we included civic education activities, as well as some efforts that focused specifically on human rights, citizen participation, and related topics. In the general/other category, we included most of the legal education grants provided by USIA, as well as assistance on various topics such as intellectual property rights and drug education and rehabilitation. To determine how much U.S. rule of law assistance was provided worldwide in fiscal years 1993-98, and to identify the U.S. departments and agencies involved, we reviewed program documentation and interviewed officials at the Department of State, USAID, the Department of Defense, and USIA—the principal sources of funding for U.S. rule of law programs. These officials identified other departments and agencies with rule of law activities. We asked officials from each of these entities to provide funding and descriptive information for its activities over the period. However, most of these departments and agencies did not have rule of law funding information readily available and had to initiate ad hoc efforts to compile data addressing our questions. Further complicating this effort was the fact that the departments and agencies did not have a commonly accepted definition of what constituted rule of law activities. Therefore, we relied on each department and agency (and the bureaus and offices within those entities) to provide us information on the programs and activities it considered rule of law. In some instances, programs with an apparent rule of law element were not included. For example, USAID did not include all of its assistance for human rights, and State did not include all of its antinarcotics assistance. Additionally, the funding data is a mix of obligated amounts and actual expenditures.
For agencies (primarily USAID) that provided rule of law assistance over several years, obligation data better reflected the magnitude of the funding involved because actual expenditures (or requests for reimbursement) may not be reported until subsequent years. However, other rule of law assistance provided, for example by law enforcement agencies, was relatively low-cost, short-term training or exchange programs. In this instance, obligations and actual expenditures were virtually synonymous. Therefore, we used actual expenditures. Because of the volume of data—almost 4,600 program and activity records—and the lack of documentation in some agencies, we did not independently verify the accuracy of the data provided. Some agencies could not provide data for the entire period—fiscal years 1993-98—or lacked funding amounts for some identified rule of law activities. USAID’s automated information system could not provide worldwide data for fiscal year 1996. The system was upgraded that year, and 1996 information was not captured in the new system nor was it available in USAID’s prior system. At our request, USAID polled each of its missions in Latin America and the Caribbean to obtain rule of law funding data, including fiscal year 1996; however, because of the magnitude of the effort, we did not request that USAID do the same for the other regions of the world. To help mitigate this limitation, we used information from other agencies indicating USAID rule of law funding for 1996. However, this information likely understates USAID’s assistance levels to regions other than Latin America and the Caribbean for the year. State’s Bureau of International Narcotics and Law Enforcement Affairs provided us funding information for fiscal years 1997 and part of 1998. Essentially, this office transferred rule of law funds to U.S. law enforcement and related agencies to assist their foreign counterparts. Therefore, for the other years, we relied on the U.S. recipients of this funding to report the amount of rule of law funding provided by the Bureau. In addition, for many agencies, the fiscal year 1998 data provided to us was compiled before the fiscal year data had been finalized and may be incomplete. However, with the exception of not having complete USAID funding information for fiscal year 1996, we believe the funding levels for the other departments and agencies generally reflect their rule of law activities. We performed our work from June 1998 to May 1999 in accordance with generally accepted government auditing standards. The Departments of Commerce, Defense, Justice, State, and the Treasury; USAID; and USIA commented on a draft of this report. Defense and USAID provided written comments (see apps. V and VI); the others provided oral comments. USAID also provided its definition of rule of law. All of the agencies concurred with the report; some provided technical comments that have been incorporated, as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies of this report to the Honorable Madeleine K. Albright, the Secretary of State; the Honorable William S. Cohen, the Secretary of Defense; the Honorable Robert E. Rubin, the Secretary of the Treasury; the Honorable William M. Daley, the Secretary of Commerce; the Honorable J. Brian Atwood, the Administrator of USAID; the Honorable Penn Kemble, the Acting Director of USIA; and interested congressional committees. 
We will make copies available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix VII. Worldwide U.S. rule of law assistance grew from about $128 million in fiscal year 1993 to about $218 million in fiscal year 1998. The growth was not uniform across the geographic regions, with Central Europe increasing from about $8 million to over $67 million during the period—supplanting the Latin America and the Caribbean region as the leading recipient of rule of law assistance. Table I.1 shows rule of law assistance by region for fiscal years 1993-98. We used “multiregional” for rule of law assistance provided to several countries in two or more regions or when such assistance was not broken out by recipient countries. In fiscal years 1993-98, the United States provided at least some rule of law assistance to 184 countries. The assistance ranged from multiyear institutional development programs to one-time, short-term training for police or other law enforcement personnel. Table II.1 shows the dollar value of the rule of law assistance provided to all the countries we identified as receiving some assistance. In some cases, the assistance was not identified with a specific country or was provided to countries in multiple regions—such assistance is identified as “regional” or “multiregional,” respectively.
Table II.1: U.S. Rule of Law Funding by Region and Country, Fiscal Years 1993-98
In compiling the rule of law assistance data for this report, we identified 7 cabinet-level departments and 28 related agencies, bureaus, and offices involved in providing rule of law assistance. Many are law enforcement agencies providing training and technical assistance to their counterparts overseas. These are listed below.
• International Trade Administration
• National Telecommunications and Information Administration
• Office of General Counsel, Commercial Law Development Program
• U.S. Patent and Trademark Office
• U.S. Air Force
• U.S. Army
• U.S. Marine Corps
• U.S. Navy
• Drug Enforcement Administration
• Federal Bureau of Investigation
• Immigration and Naturalization Service
• Criminal Division
• International Criminal Investigative Training Assistance Program
• Office of Overseas Prosecutorial Development, Assistance and Training
• Bureau of Diplomatic Security, Office of Antiterrorism Assistance
• Bureau of International Narcotics and Law Enforcement Affairs
• Bureau of Western Hemisphere Affairs (formerly Bureau of Inter-American Affairs)
• Bureau of Alcohol, Tobacco and Firearms
• Office of International Affairs
• Office of Investigations
• Federal Law Enforcement Training Center
• Financial Crimes Enforcement Network
• Internal Revenue Service
• U.S. Secret Service
• U.S. Agency for International Development (USAID)
• U.S. Information Agency (USIA)
To develop an overview of the types of activities being funded for the Latin America and the Caribbean region, we grouped the U.S. rule of law assistance program data for the region into six categories based on activity descriptions provided by the cognizant departments and agencies. Although we placed each program or activity into one primary category, many programs, USAID’s in particular, had multiple purposes that could be identified with more than one category. The following are the definitions for each category we used and the types of activities we included.
Criminal Justice and Law Enforcement: Assistance to help criminal justice or law enforcement organizations make reforms or improve their capabilities to carry out their responsibilities in a professional and competent manner. We included technical assistance and training for police, prosecutors, public defenders, and other personnel in law enforcement-related agencies (such as Customs) in this category. Assistance for police often focused on investigative capabilities and management improvements. Technical assistance and training topics included detection and identification of firearms, development of criminal investigation units, maritime law enforcement, and detection of counterfeit currency. We also included antinarcotics and antiterrorism assistance.
Judicial and Court Operations: Assistance to help reform or improve operations of judicial and court systems. We included activities that focused on modernizing court administration, training in oral advocacy skills, training judicial personnel, and establishing procedures for judge selection and a career ladder for judges. In addition, we included programs intended to improve access to the justice system and establish legal aid services and justice centers; to institute alternative dispute resolution, mediation, or arbitration procedures in various sectors; and to provide exchange opportunities, training, or research related to the judicial or legal system in general.
Civil Government and Military Reform: Assistance to help promote reform in other than judicial and law enforcement government agencies, improve cooperation and understanding between civil and military agencies, or develop responsive or responsible government institutions and officials. The majority of the activities were training courses provided by the military services on topics such as civil-military relations, professional skills for maritime and military personnel, and military law, although the largest single item was the funding to support multinational forces and police monitors in Haiti. We also included training and related programs on government ethics and corruption in this category.
Democracy and Human Rights: Assistance to promote democracy, electoral reforms, or respect for human rights. We included USAID human rights activities and many USIA-funded activities that focused on civic education, citizen participation, free press, and related topics in this category.
General/Other Activities: Assistance that did not fit into other categories or was not clearly described. We included legal education grants provided by USIA and training or exchange programs on an assortment of topics such as intellectual property rights, drug education and rehabilitation, and domestic and gender violence. In addition, we included assistance that had no description.
Law Reform: Assistance to help develop, document, or revise constitutions, laws, codes, regulations, or other guidance that institute and strengthen the rule of law. We included activities primarily focused on law reform, including judicial or criminal procedures code reforms. However, some law reform activities may be included in other categories as a component of a larger program—especially USAID programs that had multiple goals.
Table IV.1 illustrates the distribution of the rule of law assistance by the categories we developed among the countries in Latin America and the Caribbean. Well over half of all U.S.
rule of law assistance to the region was technical assistance and training for criminal justice and law enforcement personnel—police, prosecutors, public defenders, and others. In addition to those named above, Ann L. Baker, Mark B. Dowling, Marcelo Fava, Wyley Neal, and Richard Seldin made key contributions to this report.
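To illustrate the kind of tabulation described in this report, in which funding records are grouped by region, country, and category and then summed, the following is a minimal sketch. The records, field names, and dollar amounts are hypothetical stand-ins, not GAO's data or compilation process.

```python
# A minimal sketch of the kind of tabulation described in this report: summing
# program and activity records (GAO compiled almost 4,600 of them) by region and
# by category. The records, field names, and dollar amounts below are
# hypothetical stand-ins for illustration, not GAO's actual data.
from collections import defaultdict

records = [
    {"country": "Country A", "region": "Latin America and the Caribbean",
     "category": "Criminal Justice and Law Enforcement", "amount": 1_200_000},
    {"country": "Country B", "region": "Latin America and the Caribbean",
     "category": "Judicial and Court Operations", "amount": 450_000},
    {"country": "Country C", "region": "Central Europe",
     "category": "Law Reform", "amount": 300_000},
]

totals_by_region = defaultdict(float)
totals_by_category = defaultdict(float)
for record in records:
    totals_by_region[record["region"]] += record["amount"]
    totals_by_category[record["category"]] += record["amount"]

# Report totals from largest to smallest, as in the report's summary tables.
for region, total in sorted(totals_by_region.items(), key=lambda item: -item[1]):
    print(f"{region}: ${total:,.0f}")
for category, total in sorted(totals_by_category.items(), key=lambda item: -item[1]):
    print(f"{category}: ${total:,.0f}")
```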
Pursuant to a congressional request, GAO provided information on U.S. rule of law assistance programs and activities, focusing on the: (1) amount of U.S. rule of law funding provided worldwide in fiscal years 1993-1998; and (2) U.S. Departments and agencies involved in providing rule of law assistance. GAO noted that: (1) based on the funding data cognizant Departments and agencies made available, during fiscal years 1993-1998, the United States provided at least $970 million in rule of law assistance to countries throughout the world; (2) the Latin America and the Caribbean region was the largest recipient of U.S. rule of law assistance over the period, accounting for $349 million, or more than one-third of the total assistance; (3) in recent years, Central European countries received an increasingly larger share and, in 1998, Central Europe was the largest regional recipient, accounting for about one-third of all rule of law assistance; (4) the United States provided at least some assistance to 184 countries--ranging from $138 million for Haiti to $2,000 for Burkina Faso; (5) while most countries received less than $1 million, 15 countries, including 7 in Latin America and the Caribbean, accounted for just over half of the total funding; (6) at least 35 entities from various U.S. Departments and agencies have a role in the U.S. rule of law assistance programs; (7) the Departments of State and Justice and the Agency for International Development are the principal organizations providing rule of law training, technical advice, and related assistance; (8) the Department of Defense, the U.S. Information Agency, numerous law enforcement agencies and bureaus, and other U.S. Departments and agencies also have a direct role; (9) 40 countries in the Latin America and the Caribbean region received some rule of law assistance; (10) more than three-fourths of the $349 million in assistance was provided to seven countries; (11) Haiti received nearly $138 million, or about 40 percent of the regional total, largely in connection with U.S. and international efforts to restore order and democracy after a September 1991 military coup; (12) six other countries in the region--ranging from about $41 million for El Salvador to $12 million for Panama--accounted for about $127 million, or nearly 37 percent of the regional total; (13) most of the rule of law assistance for Latin America and the Caribbean was provided to help the countries reform their criminal justice or law enforcement organizations, including training and technical assistance for prosecutors, public defenders, police officers, and investigators; and (14) a substantial amount was also dedicated to improving court operations, including modernizing court administration and enhancing public access to the judicial system.
Federal agencies, including DHS and its components, have discretion to place employees on administrative leave in appropriate circumstances and for an appropriate length of time. Administrative leave is an excused absence without loss of pay or charge to another type of leave. In the absence of statutory authority to promulgate regulations addressing administrative leave for all federal employees, OPM has mentioned this leave in limited contexts in regulations covering other types of leave and excused absences for federal employees. OPM has provided additional guidance to federal agencies on administrative leave via government-wide memorandums, handbooks, fact sheets, and frequently asked questions. For example, in May 2015, OPM sent a memorandum to federal agencies that described the steps it was taking to address the recommendations from our October 2014 report on administrative leave and that included a fact sheet focused on this type of leave. OPM guidance has acknowledged numerous purposes for which administrative leave is appropriate. To promote equity and consistency across the government, OPM advises that administrative leave be limited to those situations not specifically prohibited by law and satisfying one or more of the following criteria:
The absence is directly related to the department or agency's mission;
The absence is officially sponsored or sanctioned by the head of the department or agency;
The absence will clearly enhance the professional development or skills of the employee in his or her current position; or
The absence is as brief as possible under the circumstances and is determined to be in the interest of the agency.
With respect to administrative leave for personnel matters, OPM states that placing an employee on administrative leave is an immediate, temporary solution for an employee who should be kept away from the worksite. As a general rule, administrative leave should not be used for an extended or indefinite period or on a recurring basis. Specifically, OPM guidance discusses agency use of administrative leave before or after proposing an adverse action against an employee. For example, an agency may place an employee on administrative leave during an investigation prior to proposing an adverse action when the agency believes the employee poses a threat to his or her own safety or the safety of others, the agency mission, or government systems or property. According to OPM, a federal agency should monitor the situation and move towards longer-term actions when it is possible, appropriate, and prudent to do so. An agency may also place an employee on administrative leave after proposing an adverse action. According to OPM regulations, under ordinary circumstances, an employee whose removal or suspension has been proposed will remain in a duty status in his or her regular position after the employee receives notice of the proposed adverse action. In those rare circumstances after the agency proposes an adverse action when the agency believes the employee's continued presence in the workplace may pose a threat to the employee or others, result in loss of or damage to government property, or otherwise jeopardize legitimate government interests, the agency may place the employee on administrative leave for such time as is necessary to effect the adverse action. However, OPM strongly recommends agencies consider other options prior to using administrative leave in this scenario. 
Options include assigning the employee to duties and a location where he or she is not a threat to safety, the agency mission, or government property; allowing the employee to take leave (annual leave, sick leave as appropriate, or leave without pay); or curtailing the advance notice period for the proposed adverse action when the agency can invoke the “crime provision” because it has reasonable cause to believe the employee has committed a crime for which a sentence of imprisonment may be imposed. The Merit Systems Protection Board (MSPB), among other things, adjudicates individual federal employee appeals of agency adverse actions. MSPB has recognized the authority of agencies to place employees on short-term administrative leave while instituting adverse action procedures. MSPB has also ruled that placing an employee on administrative leave is not subject to procedural due process requirements and is not an appealable agency action. This is in contrast to adverse actions, such as removals or suspensions of more than 14 days, including indefinite suspensions, which require procedural due process (such as 30 days advance notice), and are subject to appeal and reversal by MSPB where agencies fail to follow such due process procedures. Similarly, where an agency bars an employee from duty for more than 14 days, requiring that employee to involuntarily use his or her own leave, such agency actions are also subject to appeal. A federal employee may obtain judicial review of a final MSPB decision with the United States Court of Appeals for the Federal Circuit by filing a petition for review within 60 days after the Board issues notice of its final action. Between fiscal years 2011 and 2015, DHS placed 116 employees on administrative leave for personnel matters for 1 year or more, with a total estimated salary cost of $19.8 million during the same period, as shown in table 1. DHS placed the majority of these employees (69 employees or 59 percent) on administrative leave for matters related to misconduct allegations, according to DHS data. For example, as of September 30, 2015, a law enforcement agent at a DHS component had been on administrative leave for over 3 years while under investigation for allegations of criminal and administrative misconduct. These allegations raised concerns about the protection of government resources and precluded him from working as a law enforcement agent, according to the component. While on administrative leave, the employee received an estimated $455,000 in salary and benefits, according to DHS. DHS also placed employees on administrative leave for personnel matters involving fitness for duty and security clearances. Of the 116 DHS employees on administrative leave for at least 1 year between fiscal years 2011 through 2015, 28 employees (24 percent) faced matters related to fitness for duty and 19 employees (or 16 percent) faced matters related to security clearances. For example, a component placed an employee on administrative leave because of concerns regarding his personal conduct and his handling of protected information. After proposing revocation of his security clearance and allowing the employee time to respond, the agency revoked the employee’s security clearance. The employee’s position required a security clearance, and the employee remained on administrative leave while he exhausted the agency’s appeal process for revocation of his security clearance. 
Ultimately, after almost 18 months on administrative leave with an estimated salary cost of over $160,000, the employee was removed from the agency. As shown in table 1, CBP had the most employees placed on administrative leave for 1 year or more between fiscal years 2011 and 2015 (52 employees, or 45 percent of the 116 DHS employees). The estimated salary cost for these employees for the same period was $8.9 million, according to DHS. DHS reported the current status, as of the end of fiscal year 2015, of the employees who had been on administrative leave for 1 year or more as one of four options: returned to duty, on indefinite suspension, separated, and on administrative leave. Prior to proposing an adverse action, such as suspension or removal, an agency often conducts an investigation. If the agency determines that for safety or security reasons the employee cannot stay in the workplace while the investigation is being conducted, the agency may put the employee on administrative leave until it has sufficient evidence to support a proposed adverse action. If the agency cannot gather sufficient evidence, the agency may need to return the employee to duty. For example, on the basis of allegations of misconduct, a component placed an employee on administrative leave. The employee remained on administrative leave—for over 3 years with an estimated salary cost of over $340,000—while the component conducted an investigation into the allegations of misconduct, according to DHS. Ultimately, the employee was returned to duty after the component determined that it had insufficient evidence to remove the employee or to put him on indefinite suspension. Table 2 shows, as of September 30, 2015, the status of the 116 DHS employees who had been on administrative leave for at least 1 year between fiscal years 2011 and 2015. Specifically, DHS ultimately returned 32 employees (28 percent) to duty, separated more than half of the employees (59 percent) from the agency, and put 2 employees (2 percent) on indefinite suspension, according to DHS data. As of September 30, 2015, 14 of the 116 employees (12 percent) were still on administrative leave, pending a final outcome, with an estimated salary cost of $2.6 million between fiscal years 2011 and 2015. Several factors can contribute to the length of time an employee is on administrative leave for personnel matters: (1) adverse action legal procedural requirements and the length of time needed for completing investigations related to misconduct, fitness for duty, or security clearance issues; (2) limited options other than administrative leave; and (3) agency inefficiencies in resolving administrative leave cases as expeditiously as possible. These factors are described below with examples from the DHS case files we reviewed where the employee was on administrative leave for 1 year or more. Adverse action requirements. It is important to note that an agency cannot take an adverse action, such as suspending an employee for more than 14 days or removing an employee, before taking certain procedural steps outlined in law. These procedural steps are described below. 
Prior to proposing an adverse action, an agency may place an employee on administrative leave in situations when the employee should be kept away from the workplace when the agency believes the employee poses a threat to his or her own safety or the safety of others, to the agency mission, or to government systems or property while an investigation is pending. For example, one DHS employee believed to be involved in alien smuggling and considered a risk was placed on administrative leave while the component collected evidence against the employee. An option could include assigning the employee to duties where he or she is no longer a threat to safety, the agency mission, or government property, if feasible. After proposing an adverse action, agencies are required to provide employees with at least 30 days advance written notice of proposed adverse action (e.g., notice of proposed indefinite suspension, notice of proposed removal), unless there is reasonable cause to believe the employee has committed a crime for which a sentence of imprisonment may be imposed, in which case a shorter notice may be provided. For example, the notice period was shortened to a 7-day notice period for a case in which an employee was indicted for extortion and bribery, among other things. After a proposed removal notice was issued, the employee resigned. In another case, there were two 30-day proposed suspension notice periods because the employee was indefinitely suspended, reinstated, and then indefinitely suspended a second time. Further, during the adverse action process, if new facts come to light it may be necessary to provide additional notification to the employee and provide them the opportunity to reply to that new information that will be considered in the final decision. An employee is also entitled to a reasonable time, but not less than 7 days, to respond to the notice of proposed adverse action orally and in writing and to furnish affidavits and other documentary evidence in support of the answer. In some cases, these responses can take months. For example, in one case the component issued a proposed removal notice in March 2014 because of the employee’s lack of candor under oath. The employee responded in writing and orally over the next few months, raising issues that required clarification by the agency. Ultimately, the removal was finalized in November 2014, nearly 8 months after the original proposal. In addition, an employee or representative may request an extension of time to reply and has a right to review the information that the agency is relying upon. For example, in a case involving an employee accused of aggravated assault, the employee designated an attorney and requested time for the attorney to review the case before responding. Further, the component twice provided the employee with new information and time to respond. The original indefinite suspension proposal was issued in March 2014, but with the addition of the attorney and new evidence introduced, the oral response was not submitted until October 2014. If the employee wishes for an agency to consider any medical condition that may contribute to a conduct, performance, or leave problem, the employee must be given a reasonable time to furnish medical documentation. The agency may, if authorized, require a medical examination, or otherwise, at its option, offer a medical examination. 
For example, in a case that took more than 20 months to resolve, the component ordered the employee to take a fitness-for-duty exam in July 2012, after the employee exhibited hostile behavior at work. Over the course of the next 20 months, the employee received a general exam and two psychiatric exams. During this time, the employee remained on administrative leave while exams were rescheduled, physicians requested additional information, and there was miscommunication regarding medical records. In March 2014, the component determined that, according to the medical evidence, the employee was a threat to others and not able to safely perform his duties. The component ultimately removed the employee in September 2014. Conducting investigations and collecting evidence to make an adverse action determination. Investigations into allegations of employee misconduct may be extensive, potentially involving multiple interviews over a lengthy period of time, or may require investigations by third parties. Component officials indicated that where parallel criminal investigations are ongoing by a third party, such as the Federal Bureau of Investigation, U.S. Attorney's Office, Department of Justice Office of Public Integrity, or the DHS OIG, the investigation may be lengthy and the component may be limited in its ability to conduct its own investigation because it may be precluded from obtaining documents and interviewing witnesses as that may interfere with the criminal investigation. For example, in a particularly long and complex misconduct investigation, component officials said the third-party investigation (by the DHS OIG) took over 2 years to complete, including over 50 interviews conducted abroad. However, as DHS and component officials noted, well-documented investigations are vital for ensuring adverse action decisions are properly supported, as officials are cautious to avoid liability in subsequent proceedings from an appealable decision that may result in an award of back pay and attorney's fees, which can be as much as three times or more the cost of employee back pay. For example, in one case involving an employee who had been removed for knowingly hiring an undocumented alien, the employee appealed the component's decision to the MSPB. The MSPB reversed the removal decision, finding that the deciding official's consideration of the employee's conviction as grounds for removal without first notifying her of the significance that he attached to her criminal status was a due process violation. The MSPB ordered the component to retroactively restore pay and benefits to the employee. Components have also withdrawn adverse actions in response to MSPB decisions. For example, after the MSPB handed down several decisions regarding indefinite suspensions based on security clearance investigations, DHS component officials rescinded the indefinite suspensions for two similar cases and returned the employees to administrative leave in order to reevaluate their procedures for these cases. Limited options other than administrative leave. In certain situations, management officials may have limited alternative options to administrative leave. The DHS policy and OPM guidance note that agencies should consider options other than administrative leave, such as assigning the employee to alternative work arrangements or duties where he or she is no longer a threat to safety or government property. According to DHS, telework is an alternative option to administrative leave. 
However, if an employee engages in alleged misconduct involving the misuse of government equipment, telework is not likely an alternative option as the individual would have access to the same government equipment and systems that they have allegedly misused. In this case, the only alternative is placing the individual on administrative leave. Also, DHS and component officials noted that reassignment to another position is not always feasible or viable, depending on other circumstances. For example, the U.S. Secret Service requires all of its employees to maintain a top secret security clearance, so if an employee’s clearance is suspended pending an investigation, there are no alternative duties or positions to assign the employee to until the investigation is complete and a final decision is made. Potentially inefficient agency procedures. Inefficient procedures may also in some cases contribute to the extended use of administrative leave. While the facts and circumstances of each case are unique and management is faced with difficult decisions regarding appropriate actions to take in situations involving the use of administrative leave, our review of DHS case files identified examples where inefficient procedures may have contributed to the length of time the employee was on administrative leave. For example, at one DHS component, resolution of a case was delayed for months when the designated proposing and deciding officials—who are the officials responsible for proposing and making the decision on the adverse action regarding the employee—left their positions and the agency did not designate new officials in a timely manner. During this time, the employee remained on administrative leave. Filling the positions and allowing for replacements to become familiar with the case added time to resolve the case, according to agency officials. The component has since revised its procedures to allow flexibility in terms of who serves in those roles. In another case, an employee’s top secret security clearance was suspended based on concerns about the employee’s behavior and the employee was placed on administrative leave in December 2011. However, a mandatory physical examination to establish the employee’s fitness for duty was not scheduled for this employee until May 2012. In another case, it was almost 5 months from the notice of proposed removal to the final decision, although the component already had medical documentation the employee was unable to perform his job. In September 2015, DHS issued a policy on the proper use of administrative leave across the department. Prior to its issuance, the department did not have a policy or guidance regarding the proper use of administrative leave. Instead, components had their own approach to managing administrative leave, and policies and procedures varied across the components in terms of oversight, approvals, and tracking. According to DHS officials, they issued this policy to help ensure proper and limited use of administrative leave across the department, consistent with OPM guidance. Component officials said they would modify their policies and procedures as necessary to ensure compliance with the requirements of the DHS policy. Key provisions in the DHS policy include the following. An emphasis on using administrative leave for short periods of time and only as a last resort for personnel matters. 
Citing OPM's guidance on the appropriate use of administrative leave, the policy includes examples of when it is appropriate for a manager to grant administrative leave, such as for dismissal or closure because of severe weather, voting, or blood donations. For personnel matters, such as during an investigation of the employee, the policy states that employees should remain in the workplace unless the employee is believed to pose a risk to himself or herself, to others, or to government property, or to otherwise jeopardize legitimate government interests. Other management options should then be considered, such as indefinite suspension, if appropriate, with administrative leave as a last resort. Requiring elevated management approval for longer periods of use. Supervisors can approve administrative leave for short periods, consistent with legal authority and relevant guidance. Supervisors are expected to consult with human resources officials and counsel as appropriate. No component may place an employee on administrative leave for more than 30 consecutive days without the approval of the component head or his or her designee. Routine reporting on administrative leave use to component and DHS management for increased visibility. Component heads are to receive quarterly reports on employees who are placed on administrative leave for 320 hours or more and to consider whether administrative leave continues to be warranted. Components are to report quarterly to the DHS Chief Human Capital Officer regarding employees placed on administrative leave for 960 hours (6 months) or more. DHS's new policy is intended to increase DHS and component awareness regarding the use of administrative leave by requiring elevated management approval and routine reporting to component heads and the DHS Chief Human Capital Officer, among other things, according to DHS officials. However, the policy does not address how DHS will evaluate the effectiveness of the policy in ensuring proper and limited use of administrative leave. Federal internal control standards call for agency management to establish internal control activities to ensure that ongoing monitoring occurs in the course of normal operations and that separate evaluations are conducted to assess effectiveness at a specific time. The standards also note that information on the deficiencies found during ongoing monitoring and evaluations should be communicated within the organization. DHS's new administrative leave policy provides for routine monitoring by component heads and the DHS Chief Human Capital Officer of administrative leave usage, which should help increase management visibility of the issue. DHS officials said they intend to use the quarterly reports to determine if administrative leave continues to be warranted for those specific cases. However, they acknowledged that conducting evaluations and sharing evaluation results could help ensure the effectiveness of the policy and procedures across DHS. Evaluations of DHS's administrative leave policy can help the department identify and share particularly effective component practices for managing administrative leave, such as identifying alternative duties to assign employees instead of placing them on administrative leave. They may also help identify inefficient component processes, such as those we identified, that could increase the length of time an employee spends on administrative leave, allowing DHS to then take steps to address such inefficiencies and their causes. 
An evaluation may also identify unintended consequences resulting from DHS's administrative leave policy that monitoring does not capture. For example, an evaluation may find that the reporting aspects of the policy serve as an incentive to suspend or remove an employee before such actions are supported by an investigation, which may cost a component more if the action is successfully appealed. Finally, conducting evaluations of DHS's administrative leave policy may help ensure DHS's administrative leave policy and procedures are effective in reducing the use of administrative leave—one of the intended goals of the new policy—and ensuring the use is proper and justified. Administrative leave is a cost to the taxpayer, and its use should be managed effectively. While the reporting requirements in DHS's new administrative leave policy should help increase DHS and component awareness regarding the use of such leave and will allow for regular monitoring, the policy does not require a more comprehensive separate evaluation of the effectiveness of the policy and related procedures. Once the DHS policy and procedures have been in place and administrative leave routinely monitored, a separate evaluation of the policy and procedures can help the department identify and share effective component practices for managing administrative leave as well as make adjustments needed to help ensure proper and limited use of administrative leave across DHS. To ensure that the department's administrative leave policy is working as intended, we recommend that the Secretary of Homeland Security direct the Chief Human Capital Officer to conduct evaluations of the department's policy and related procedures to identify successful practices, potential inefficiencies, and necessary policy and procedural adjustments, and to share the evaluation results across the department. We provided a draft of this product to DHS and OPM for their review and comment. DHS provided written comments, which are reproduced in full in appendix II. OPM did not provide written comments. In its comments, DHS concurred with the recommendation in the report and described planned actions to address it. Specifically, DHS stated that it will evaluate the effectiveness of the new administrative leave policy and related procedures, as GAO recommends. Also, DHS noted that an initial review of the administrative leave data from the first quarter of fiscal year 2016 was completed in February 2016, and the review of all fiscal year 2016 data and recommendations concerning administrative leave policy and related procedures will be completed by March 31, 2017. These planned actions, if fully implemented, should address the intent of the recommendation contained in this report. We are sending copies of this report to the Secretary of Homeland Security, the Acting Director of the Office of Personnel Management, and the appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (213) 830-1011 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III. 
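As a side illustration of the quarterly reporting thresholds in the DHS policy described earlier, the sketch below expresses the 320-hour and 960-hour tiers as a simple rule. This is a minimal, hypothetical sketch under stated assumptions, not DHS's actual reporting implementation; the function name, record layout, and example figures are assumptions made only for illustration.

```python
# Thresholds described in the DHS policy: component heads receive quarterly
# reports on employees at 320 hours or more of administrative leave, and cases
# reaching 960 hours (6 months) or more are also reported quarterly to the DHS
# Chief Human Capital Officer. The names and example data below are hypothetical.
COMPONENT_HEAD_THRESHOLD_HOURS = 320
CHCO_THRESHOLD_HOURS = 960

def reporting_tier(admin_leave_hours: float) -> str:
    """Return which quarterly report(s) a case would appear in under this sketch."""
    if admin_leave_hours >= CHCO_THRESHOLD_HOURS:
        return "component head and DHS CHCO reports"
    if admin_leave_hours >= COMPONENT_HEAD_THRESHOLD_HOURS:
        return "component head report only"
    return "below reporting thresholds"

# Hypothetical cases for illustration; 2,080 hours is roughly one work year.
for employee, hours in [("A", 80), ("B", 400), ("C", 2080)]:
    print(employee, reporting_tier(hours))
```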
To present more detailed information on DHS’s use of administrative leave, and to help verify the reliability of the information we obtained from DHS, we analyzed data from the Office of Personnel Management’s (OPM) Enterprise Human Resources Integration (EHRI) system on DHS employees on at least 3 months of administrative leave between fiscal years 2011 and 2014. Fiscal year 2015 data were not available at the time of this report. As shown in table 3, during this period a total of 752 DHS employees were on administrative leave for 3 months or more between fiscal years 2011 and 2014, and 90 of these DHS employees were on this type of leave for 1 year or more during this period. This last number of employees is similar to the 87 employees on administrative leave for at least 1 year between fiscal years 2011 and 2014 reported in the DHS information. In addition to the contact named above, Adam Hoffman (Assistant Director), Juan Tapia-Videla (Analyst-in-Charge), Monica Kelly, Tracey King, David Alexander, Cynthia Grant, and Chris Zbrozek made significant contributions to this report.
Federal agencies have the discretion to authorize administrative leave—an excused absence without loss of pay or charge to leave—for personnel matters, such as when investigating employees for misconduct allegations. In October 2014, GAO reported on the use of administrative leave in the federal government. GAO found that, between fiscal years 2011 and 2013, 263 federal employees were on this type of leave for 1 year or more during this 3-year period. Of these, 71 were DHS employees. GAO was asked to examine DHS's use of administrative leave across directorates, offices, and components (DHS components). This report describes (1) the number of DHS employees who were on administrative leave for 1 year or more for personnel matters from fiscal years 2011 through 2015, (2) the factors that contribute to the length of time employees are on administrative leave, and (3) the extent to which DHS has policies and procedures for managing such leave. GAO used data from DHS and the Office of Personnel Management, reviewed DHS policies and procedures, interviewed DHS officials, and reviewed information on selected cases of DHS employees placed on administrative leave. Cases were selected based on length of leave, reason for using leave, and DHS component, among other things. Between fiscal years 2011 and 2015, 116 Department of Homeland Security (DHS) employees were on administrative leave for personnel matters for 1 year or more, with a total estimated salary cost of $19.8 million for this period. Of these 116 employees on administrative leave: 69 employees (59 percent) were for matters related to misconduct allegations, 28 employees (24 percent) were for matters related to fitness for duty issues, and 19 employees (or 16 percent) were for matters related to security clearance investigations. As of September 30, 2015, DHS reported that of these 116 employees: 68 employees (59 percent) were separated from the agency, 32 employees (28 percent) were back on duty, 2 employees (2 percent) were on indefinite suspension, and 14 employees (12 percent) remained on administrative leave. Several factors can contribute to the length of time an employee is on administrative leave for personnel matters, such as certain legal procedural steps that must be completed before suspending or removing an employee, or time needed for completing investigations. For example, in one particularly long and complex misconduct investigation, an employee was on administrative leave for over 2 years while investigating officials conducted over 50 interviews abroad. In September 2015, DHS issued an administrative leave policy to ensure proper and limited use of administrative leave across the department. The policy clarifies when such leave is proper, elevates the level of management approval needed for longer periods of leave, and requires quarterly reporting of leave use to component heads and the Chief Human Capital Officer. Component policies and procedures varied prior to the DHS policy; however, component officials stated they would make changes needed to comply with the new policy. Federal internal control standards call for agencies to conduct routine monitoring and separate evaluations to ensure agency controls are effective, and to share their results. While the quarterly reports required under DHS's policy provide routine monitoring information, the policy does not address how DHS will evaluate the effectiveness of the policy and related procedures or how DHS will share lessons learned. 
DHS officials said they plan to learn from reviewing quarterly reports, but agreed evaluations could be valuable in assessing policy effectiveness. Evaluations of DHS's administrative leave policy can help the department identify effective practices for managing administrative leave, as well as agency inefficiencies that increase the time employees spend on such leave. Sharing evaluation results with components may help ensure DHS's administrative leave policy and procedures are effective, and are achieving the intended result of reducing leave use. GAO recommends that DHS evaluate the results of its administrative leave policy and share the evaluation results with the department's components. DHS concurred with the recommendation.
Before highlighting the results of our review of the fiscal year 2003 PARs, I would like to summarize the requirements of the Improper Payments Act. The act requires the head of each agency to annually review all programs and activities that the agency administers and identify all such programs and activities that may be susceptible to significant improper payments. For each program and activity identified, the agency is required to estimate the annual amount of improper payments and submit those estimates to the Congress before March 31 of the following applicable year. The act further requires that for any agency program or activity with estimated improper payments exceeding $10 million, the head of the agency shall provide a report on the actions the agency is taking to reduce those payments. The Improper Payments Act also required the Director of OMB to prescribe guidance to implement its requirements not later than six months after the date of its enactment (Nov. 26, 2002). OMB issued this guidance on May 21, 2003. It states that each agency shall report the results of its improper payment efforts in the Management Discussion and Analysis section of its PAR for fiscal years ending on or after September 30, 2004. In general, the first set of reports required by the guidance is due in November 2004. Significantly, the guidance issued in May 2003 also required that 15 agencies publicly report, in their fiscal year 2003 PARs, improper payment information for 46 programs identified in OMB Circular A-11, Section 57. Section 57 required agencies to include improper payment information for these agencies and programs in their nonpublic budget submissions to OMB, beginning with the fiscal year 2003 budget proposals. According to OMB, the programs were selected primarily because of their large dollar volume ($2 billion or more in outlays). In July 2003, OMB dropped the requirement for information on erroneous payments and eliminated Section 57 requirements for preparing fiscal year 2005 budget submissions. The information previously called for in the circular includes actual and estimated improper payments and rates, targets for reducing the improper payment rates identified, and corrective action plans to reach the targets. If diligently and vigorously implemented, the Improper Payments Act should have a significant impact on the governmentwide improper payments problem. The level of importance each agency, the administration, and the Congress place on the efforts to implement the act will determine its overall effectiveness and the extent to which agencies reduce improper payments and ensure that federal funds are used for their intended purposes. As you requested, we reviewed the fiscal year 2003 PARs for the 15 agencies and 46 programs previously cited in OMB Circular A-11, Section 57, to identify the improper payment information contained therein. Table 1 summarizes the improper payment estimates agencies reported in their fiscal year 2003 PARs. Further review of the table shows that the PARs contained improper payment estimates for 31 of the 46 agency programs previously listed in Circular A-11. The reports contained information on agency initiatives to prevent or reduce improper payments for 22 programs and on impediments to improper payment prevention or reduction for 11 programs. Some agencies partially reported required information. 
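As a rough illustration of the act's reporting logic summarized above, the sketch below flags hypothetical programs whose estimated improper payments exceed the $10 million threshold and would therefore require a report on reduction actions. It is a minimal sketch under stated assumptions, not an implementation of OMB's guidance; the program names, field names, and dollar figures are invented for illustration only.

```python
# Reporting rule described above: agencies identify programs susceptible to
# significant improper payments, estimate annual improper payments for each,
# and report reduction actions for any program whose estimate exceeds
# $10 million. The program inventory and field names below are hypothetical.
REPORT_THRESHOLD_DOLLARS = 10_000_000

programs = [
    {"name": "Program X", "susceptible": True, "estimated_improper": 45_000_000},
    {"name": "Program Y", "susceptible": True, "estimated_improper": 4_000_000},
    {"name": "Program Z", "susceptible": False, "estimated_improper": None},
]

for program in programs:
    if not program["susceptible"]:
        # No estimate is required if the program is not identified as
        # susceptible to significant improper payments.
        continue
    needs_report = program["estimated_improper"] > REPORT_THRESHOLD_DOLLARS
    print(f'{program["name"]}: estimated improper payments '
          f'${program["estimated_improper"]:,}; '
          f'reduction-actions report required: {needs_report}')
```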
Figure 1 presents, by agency program, the level of reporting that we found for the three categories of information you asked about (improper payment amounts; initiatives to prevent improper payments, reduce them, or both; and impediments to preventing or reducing them). As you can see, the level of reporting was all over the board. Further, although agencies may have met the reporting requirements for particular programs by addressing them in PARs, in many cases, the information reported was limited to agency plans for future measures that may not come about. In some cases, agencies reported that they had already determined that programs were not susceptible to significant improper payments, despite the fact that the auditor's reports in the same PARs identified management challenges or material internal control weaknesses within the programs where the design or operation of an internal control procedure did not reduce, to a relatively low level, the risk that errors, fraud, or noncompliance that would be material to the financial statements may occur and not be detected promptly by employees in the normal course of performing their duties. This situation appears contradictory. Although OMB has required agencies to perform various improper-payment-related identification and corrective action activities for the past three years for these 46 programs, figure 1 shows that only seven agencies reported all of the required elements you asked about—estimated amounts, initiatives taken to reduce improper payments, and impediments to improper payment prevention or reduction—representing only 9 of the 46 programs (20 percent). One of the agencies, for one of its programs, reported estimated improper payment amounts, discussed ongoing collaborative efforts made with and between program partners (such as state agencies) to improve payment accuracy and to share "best practice" information, and further reported that recent legislation weakened the penalties imposed on program partners for high error rates and reduced the incentives offered for lower rates. Another agency reported an improper payment amount for three of its four required programs, reported initiatives such as improving program guidance and training, and addressed impediments such as the lack of available income data needed to verify applicant-provided income information. A third agency reported an estimate for one of its three required programs, reported initiatives including promoting and funding data exchanges with program partners, and reported that its principal impediment was the cost of detecting eligibility issues. For 10 of the 46 programs represented in six agencies, the agencies reported estimated improper payment amounts and initiatives taken to reduce improper payments, but did not address any impediments. For one program, an agency estimated improper payments and discussed initiatives to correct benefit computation errors and improve the beneficiary earnings test. Another agency is performing annual on-site reviews. One agency reported an improper payment amount for a program and discussed initiatives, such as implementing an automated system to identify coding and billing errors. Other initiatives reported by agencies included conducting recovery audits, collaborating with other federal agencies to identify and recover payments made to ineligible beneficiaries, and issuing policy notices and providing training to agency personnel on program processes. 
Six agencies reported estimated amounts for 11 programs, but did not discuss initiatives taken to reduce improper payments or impediments to preventing or reducing improper payments. For three programs, agencies reported no estimated amounts or impediments, but did discuss initiatives taken to reduce improper payments, such as expanding annual post-award monitoring and oversight processes. One agency did not report estimated improper payment amounts or discuss initiatives taken to reduce improper payments for one of its programs but identified some of the impediments it has encountered in preventing or reducing them, such as the unavailability of the data necessary to accurately measure improper payments. For 11 of the 46 programs for which agencies were required to report improper payment information in their fiscal year 2003 PARs, four agencies did not report estimated amounts, initiatives taken to reduce improper payments, or impediments to preventing or reducing improper payments, even though OMB Circular A-11, Section 57, had originally required agencies to report improper payment data, assessments, and action plans with their budget submissions beginning in July 2001. One agency reported, "… erroneous payments are very unlikely … limited to instances of fraud… " Agencies for several programs reported only that they were continuing to develop improper payment error rates, but reported no further information. In October 2001, we issued an executive guide on strategies to manage improper payments that was based on information that we obtained from public and private sector organizations that identified and took actions designed to reduce improper payments in their programs. We found that the actions that these organizations took shared a common focus of improving the internal control system over problem areas. This system consists of five primary components—the control environment, risk assessments, control activities, information and communications, and monitoring. Internal controls are not one event, but a series of actions and activities that occur throughout an entity's operations and on an ongoing basis. People make internal controls work, and responsibility for good internal control rests with all managers. One of the biggest hurdles that many entities face in the process of managing improper payments is overcoming the tendency to deny the problem. It is easy to rationalize avoiding or deferring action to address a problem if you do not know how big the problem is. The nature and magnitude of the problem need to be determined through a systematic risk assessment process and openly communicated to all relevant parties. When this occurs, especially in a strong control environment, denial is no longer an option, and managers have the information, as well as the incentive, to begin addressing improper payments. Fraud, waste, and abuse in federal activities and programs lead to the loss of billions of dollars of government funds, erode public confidence, and undermine the federal government's ability to operate effectively. Unfortunately, that assessment comes from a 1985 GAO report on federal agencies' implementation of 31 U.S.C. 3512 (c), (d) (commonly referred to as the Federal Managers' Financial Integrity Act of 1982 (Financial Integrity Act)). 
Continuing concern over the poor condition of government internal controls and accounting systems led the Congress to pass this legislation, which requires, among other things, ongoing evaluations and reports on the adequacy of the systems of internal accounting and administrative control of each executive agency. It requires the head of each agency to issue an annual report that identifies material weaknesses identified through the assessment process and the actions planned to correct those weaknesses. An August 1984 GAO report that summarized the results of our governmentwide review of agencies' efforts to implement the Financial Integrity Act found that agencies had made a good start in the first year of assessing their internal control and accounting systems and had demonstrated a management commitment to implementing the act. Top agency and OMB managers were becoming involved. The report characterized the first-year effort as a learning experience and noted that much remained to be done to complete the evaluation process and correct the problems identified. Our 1984 review of the material weaknesses identified in the annual reports of 17 major agencies revealed that
16 agencies reported accounting/financial management system weaknesses;
14 agencies reported procurement weaknesses;
13 agencies reported property management weaknesses;
12 agencies reported cash management weaknesses;
12 agencies reported grant, loan, and debt collection management weaknesses; and
8 agencies reported eligibility and entitlement weaknesses.
We concluded that, since the initial work in implementing the act had been accomplished, agencies needed to develop comprehensive plans to correct the material weaknesses identified. Correction of problems represents the "bottom line" of the act. We further recognized that many of the weaknesses identified were long-standing. They did not develop overnight, and their solutions would not be easy. It would take a sustained, high-priority commitment. In commenting on this report, OMB agreed that a long-term commitment to improving internal control was necessary and that weaknesses identified in the first year must be corrected. "According to the testimony, a good beginning has been made toward implementing the Act. It is clear, however, that much more remains to be done …. This year agencies began the review process. Now, they must improve on the work they did last year and conduct in-depth internal control reviews. Above all, corrective actions must be taken on the deficiencies found." In our report, we noted that, while the act required agency heads to report material weaknesses in their annual reports, the annual reviews conducted identified significant numbers of less serious internal control weaknesses. For example, although Treasury did not report any additional material weaknesses in its 1984 annual statement, its component bureaus identified 89 weaknesses that they considered material and reported 127 associated corrective actions. According to Treasury's 1984 annual statement, the bureaus had completed 46 (36 percent) of these 127 corrective actions. Similarly, the military services identified and reported correcting thousands of control weaknesses at lower levels. Army managers, for example, reported correcting 3,600 internal control weaknesses in 1984 that were not considered to be material from an agency perspective. In November 1989 testimony, former Comptroller General Charles A. 
Bowsher again addressed this issue by noting that based on the results of the internal control assessments and examinations of the systems problems that agencies have reported and that GAO and federal audit organizations have identified in their audit reports, it is evident that the government does not currently have the internal control and accounting systems necessary to effectively operate many of its programs and safeguard its assets; many weaknesses are long-standing and have resulted in billions of dollars of losses and wasteful spending; major government scandals and system breakdowns serve to reinforce the public's perception that the federal government is poorly managed, with little or no control over its activities; and top-level officials must provide leadership if this situation is to change. In summary, during the 1980s, federal agencies conducted significant numbers of internal control assessments and identified and reported taking corrective actions to eliminate the weaknesses found. Yet, at the end of the decade, controls remained inadequate and these weaknesses resulted in billions in losses and wasteful spending. Significantly, the final item cited by Mr. Bowsher in his 1989 testimony is indicative of a weak control environment. Our past work has shown that the control environment is perhaps the most significant component of internal control for the identification, development, and implementation of activities to reduce improper payments. As pointed out in our executive guide on managing improper payments, without this top-level leadership, the outlook for overall improvements in the governmentwide effort to reduce improper payments is limited. From the early 1990s to the present, additional initiatives called for actions to strengthen internal controls over federal programs and financial management activities. The Chief Financial Officers (CFO) Act of 1990, as expanded by the Government Performance and Results Act of 1993 (GPRA); the Government Management Reform Act of 1994; and the President's Management Agenda are a few of these initiatives. Our reports that discuss these initiatives may not specifically focus on improper payments and agency efforts to reduce such payments, but they do discuss agency internal controls over various programs, activities, or both and actions to identify weaknesses in those controls and to design and implement actions to eliminate those weaknesses. Therefore, there is a direct relationship between agency activities regarding those initiatives and agency actions to implement the Improper Payments Act. In recent testimony before this subcommittee on the fiscal year 2003 U.S. government financial statements, Comptroller General David M. Walker noted that certain material weaknesses in internal control and in selected accounting and reporting practices resulted in conditions that continued to prevent GAO from being able to provide the Congress and American citizens with an opinion as to whether the consolidated financial statements of the U.S. government are fairly stated in conformity with U.S. generally accepted accounting principles. One of these material weaknesses involved improper payments that, based on the limited information available, exceeded $35 billion annually. 
The testimony noted that without a systematic measurement of the extent of improper payments, federal agency management cannot determine (1) whether improper payment problems that require corrective action exist, (2) what mitigation strategies and level of investment are appropriate to reduce them, and (3) whether efforts implemented to reduce improper payments have been successful. GPRA is the centerpiece of a statutory framework that the Congress put in place during the 1990s to help resolve the long-standing management problems that have undermined the federal government's efficiency and effectiveness and to provide greater accountability for results. GPRA was intended to address several broad purposes, including strengthening the confidence of the American people in their government; improving federal program effectiveness, accountability, and service delivery; and enhancing congressional decision making by providing more objective information on program performance. It has resulted in a great deal of progress in making federal agencies more results-oriented, but numerous challenges still exist. Top leadership commitment and sustained attention to achieving results, both within the agencies and at OMB, are essential to GPRA implementation. Top leadership commitment is a characteristic of a positive control environment. This again raises the issue of the adequacy of the control environment at federal agencies. Leadership commitment is important not only to GPRA implementation, but also to other management activities and initiatives, including successful implementation of the Improper Payments Act. Our executive guide on managing improper payments identified a positive control environment as perhaps the most significant element critical to the identification, development, and implementation of activities to reduce improper payments. The guide can provide useful information to leaders in formulating and implementing their programs to reduce improper payments. In an October 2003 report on governmentwide efforts to address improper payment problems, we noted that, as part of the President's Management Agenda, officials at OMB told us that they had met with officials from all relevant agencies to provide assistance and to ensure that agencies (1) understood the requirements set forth in its guidance for implementing the Improper Payments Act, (2) have started to inventory their programs and activities for significant risk of improper payments, (3) understand the risk assessment process, and (4) understand the reporting requirements under the Improper Payments Act. In that report, we concluded that the governmentwide effort to identify and assess the magnitude of improper payments, to take actions to reduce those payments, and to publicly report the results of those efforts is generally in its infancy. We further reported that although OMB Circular A-11 had required 14 CFO Act agencies to report selected improper payment information on 44 programs to OMB beginning with their fiscal year 2003 budget submissions, those agencies had completed risk assessments for only 15 of the programs, despite the Congress's mandate in 1982 through the Financial Integrity Act that agencies continually assess their internal control systems and report annually on their adequacy. Since the issuance of our October 2003 report, federal agencies have issued their fiscal year 2003 PARs. 
As I discussed earlier in this testimony, the fiscal year 2003 PARs typically contained limited amounts of improper payment information, even for those programs previously cited in Circular A-11 for which a reporting requirement has existed since agency submissions of their fiscal year 2003 budgets to OMB. Our executive guide on managing improper payments recognized that in federal agencies, implementation of a strong system of internal control will likely not be easy or quick and will require strong support and continuous action from the President, the Congress, top-level administration appointees, and agency management officials. Once committed to a plan of action, they must remain steadfast supporters of the end goals, and their support must be transparent to all. Agencies must be held accountable for appropriately managing and controlling their programs and safeguarding program assets. OMB must continue to provide direction and support to agency management in the implementation of governmentwide efforts, such as those involving improper payments, and conduct appropriate oversight of federal agency efforts to meet their stewardship and program management responsibilities. It is also critical that the Congress continue its oversight, through public hearings such as this one, to make it clear to agency and OMB officials that efforts to reduce improper payments are expected and that failure to do so is not an option. Since 1982, various legislative and administrative initiatives have focused on and required agency assessments of internal controls over programs and financial management activities. Although these initiatives may not specifically target improper payments, by emphasizing internal controls, they have recognized the important role that internal controls have in ensuring that federal programs achieve their intended results and that federal agencies operate them effectively and efficiently. Given this long-standing emphasis on internal control and the various long-standing requirements to identify and implement actions to correct control system weaknesses identified, it is fair to ask two questions. First, is it reasonable to expect that federal agencies have significant information on the condition of internal controls over their programs and activities? Second, should agencies be able to identify their programs and activities that are susceptible to improper payments and to meet the other requirements established by the Improper Payments Act? Based on the legislative and administrative initiatives over the past 20-plus years, I think that the answer to both is an emphatic yes. Many positive improvements have resulted from the various initiatives related to internal control and financial management over the past 20-plus years. However, I am concerned that we continue to see a trend in agency actions to address internal control problems: agencies often get off to a good start, but they do not sustain their efforts. Given this history and the unknown and potentially significant magnitude of improper payments governmentwide, it is clear that we are facing a major management challenge in adequately addressing the problem. The needed governmentwide initiatives are in place; they must now be effectively implemented. Key to this effort is the need for a strong control environment that creates a culture of accountability and establishes a positive and supportive attitude toward reducing improper payments. This concludes my prepared statement. 
I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For further information, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-9508, or Tom Broderick, Assistant Director, at (202) 512-8705. You can also reach them by e-mail at [email protected] or [email protected]. Individuals making key contributions to this testimony included Bonnie McEwan and Donell Ries.

Agencies and programs previously cited in OMB Circular A-11, Section 57:

1. Department of Agriculture
   2. Commodity Loan Program
   3. National School Lunch and Breakfast
   4. Women, Infants, and Children
2. Department of Defense
   6. Military Health Benefits
3. Department of Education
   7. Student Financial Assistance
4. Department of Health and Human Services
   13. Foster Care – Title IV-E
   14. State Children’s Insurance Program
   15. Child Care and Development Fund
5. Department of Housing and Urban Development
   16. Low Income Public Housing
   17. Section 8 Tenant Based
   18. Section 8 Project Based
   19. Community Development Block Grants (Entitlement Grants, States/Small Cities)
6. Department of Labor
   21. Federal Employee Compensation Act
   22. Workforce Investment Act
7. Department of the Treasury
   23. Earned Income Tax Credit
8. Department of Transportation
   24. Airport Improvement Program
   25. Highway Planning and Construction
   26. Federal Transit – Capital Investment Grants
   27. Federal Transit – Formula Grants
9. Department of Veterans Affairs
   29. Dependency and Indemnity Compensation
10. Environmental Protection Agency
    32. Clean Water State Revolving Funds
    33. Drinking Water State Revolving Funds
11. National Science Foundation
    34. Research and Education Grants and Cooperative Agreements
12. Office of Personnel Management
    35. Retirement Program (Civil Service Retirement System and Federal Employees’ Retirement System)
    36. Federal Employees Health Benefits Program
    37. Federal Employees’ Group Life Insurance
13. Railroad Retirement Board
    38. Retirement and Survivors Benefits
    39. Railroad Unemployment Insurance Benefits
14. Small Business Administration
    40. 7(a) Business Loan Program
    41. 504 Certified Development Companies
    43. Small Business Investment Companies
15. Social Security Administration
    44. Old Age and Survivors’ Insurance
    46. Supplemental Security Income Program
The Improper Payments Information Act of 2002 requires that agencies annually review all their programs and activities and identify those that may be susceptible to significant improper payments. It further requires those agencies with improper payments exceeding $10 million to provide a report on the actions being taken to reduce those payments. This testimony updates agency progress in implementing the act based on our review of agency fiscal year 2003 Performance and Accountability Reports for the 15 agencies and 46 programs previously cited in Office of Management and Budget Circular A-11, Section 57, which required those agencies to report improper payment information for those programs to the Office of Management and Budget beginning with their fiscal year 2003 budget submissions. The areas we addressed were (1) agencies that reported improper payments information and the programs and activities on which that information was based, (2) amounts of improper payments reported, (3) initiatives agencies reported taking to reduce those payments and the results of those initiatives, and (4) impediments to the prevention or reduction of improper payments reported. The fiscal year 2003 Performance and Accountability Reports (PAR) typically contained limited amounts of improper payment information even for those programs previously cited in Circular A-11 for which a reporting requirement has existed for at least three years. The PARs contained improper payment estimates for 31 of the 46 programs listed in Circular A-11. They contained information on agency initiatives to prevent or reduce improper payments for 22 programs and on impediments to improper payment prevention or reduction for 11 programs. Seven of the 15 agencies reported on all three categories of information requested (improper payment amounts, initiatives taken to reduce or prevent improper payments, and impediments to improper payment prevention or reduction) for 9 of the 46 programs. Four agencies did not report on any of the three elements for 11 of the 46 programs. In some cases, agencies reported that they had already determined that their programs were not susceptible to significant improper payments. However, the auditors' reports in the same PARs identified management challenges or material internal control weaknesses within those programs, where the design or operation of internal control procedures did not reduce, to a relatively low level, the risk that errors, fraud, or noncompliance material to the financial statements could occur and not be detected promptly by employees in the normal course of performing their duties. Key to the effort of reducing improper payments is the need for a strong control environment, including top leadership commitment and sustained attention to achieving results. Since 1982, various legislative and administrative initiatives have focused on and required agency assessments of internal controls over programs and financial management activities. Although these initiatives may not specifically target improper payments, by emphasizing internal controls, they have recognized the importance of internal controls--including a strong control environment--in ensuring that federal programs achieve their intended results and that federal agencies operate them effectively and efficiently. If diligently and vigorously implemented, the Improper Payments Information Act of 2002 should have a significant impact on the governmentwide improper payments problem. The level of importance each agency, the administration, and the Congress place on efforts to implement the act will determine its overall effectiveness and the extent to which agencies reduce improper payments and ensure that federal funds are used efficiently and for their intended purposes.
This report responds to a request from the Chairman of the Subcommittee on Oversight of the House Committee on Ways and Means that we assess the Internal Revenue Service’s (IRS) performance during the 1999 tax filing season. In April 1999, we testified before the Subcommittee on the interim results of our work. In addition to providing data on various indicators that IRS uses to measure its filing season performance, this report discusses (1) IRS’ telephone service; (2) service provided at IRS walk-in sites; (3) other IRS efforts to assist taxpayers; (4) IRS efforts to reduce Earned Income Credit (EIC) noncompliance; (5) electronic filing; (6) IRS’ implementation of certain tax law changes; and (7) implementation of IRS’ new return and remittance processing system. The last chapter of this report contains our overall conclusions, several recommendations to the Commissioner of Internal Revenue, and IRS’ comments on those recommendations. For most taxpayers, their only contacts with IRS involve the annual filing of their income tax returns. Most taxpayers file their returns between January 1 and April 15, the deadline for filing individual income tax returns. However, a large number of taxpayers get extensions from IRS that allow them to delay filing their returns until as late as October 15. IRS provides various services in an effort to help taxpayers file correct returns. For example, taxpayers can (1) call IRS toll-free to get answers to tax law questions and order tax forms and publications; (2) get information or help in preparing their returns at IRS walk-in sites; (3) get their returns prepared at volunteer tax assistance sites sponsored by IRS; and (4) get information, including answers to tax law questions, through IRS’ Web site. IRS also encourages taxpayers to file their returns electronically under a plan for electronic tax administration that is “designed to eliminate barriers, provide incentives, and use competitive market forces to make significant progress toward: (1) the overriding goal of 80 percent of all tax and information returns being filed electronically by the year 2007, and (2) the interim goal that, to the extent practicable, all returns prepared electronically should be filed electronically for taxable years beginning after 2001.” The 80-percent goal cited in IRS’ plan derives from a requirement in the IRS Restructuring and Reform Act of 1998. One benefit of electronic filing is that IRS can bypass its labor-intensive and error-prone paper processing system. For the 1999 filing season, IRS introduced a new processing system called the Integrated Submission and Remittance Processing System. The return and remittance processing systems that the new system replaced were old and could not be made year 2000 compliant. Several new tax credits and deductions took effect in tax year 1998 (the tax year for which returns are filed in 1999). Those new credits and deductions included a maximum $400 tax credit for each qualifying child, an additional child tax credit that was designed to benefit taxpayers with three or more children, and various education-related deductions and credits. One tax credit that has been around for several years is the EIC, which is a refundable tax credit established by Congress in 1975 to offset the impact of Social Security taxes and to encourage low-income workers to seek employment rather than welfare. Because of concerns about significant levels of noncompliance associated with the EIC, Congress, in fiscal year 1998, began appropriating funds to IRS that were specifically targeted at EIC noncompliance.
With those funds, IRS has initiated various assistance and enforcement efforts focused on reducing that noncompliance. Also, in 1999, IRS implemented new procedures, as mandated by the Taxpayer Relief Act of 1997 (TRA97), that require certain taxpayers to document their eligibility for the EIC before IRS approves their claim. Our objective was to assess IRS’ performance during the 1999 filing season, with particular emphasis on several areas identified in the Subcommittee’s request. To achieve our objective, we
- analyzed filing-season data from various IRS management information systems, such as the Management Information System for Top Level Executives; IRS data on processing errors, including errors involving the EIC and the child tax credit; and data on IRS’ toll-free telephone assistance and IRS’ Web site;
- obtained data on IRS’ goals and accomplishments for various performance measures and discussed the methodology for computing many of those measures with cognizant officials;
- assessed IRS’ methodology for measuring the quality of assistance provided taxpayers who call IRS with tax law questions and analyzed the results of that methodology;
- interviewed officials who were responsible for managing IRS’ toll-free telephone operations as well as officials at telephone call sites in Atlanta, GA, and Fresno, CA, and analyzed the results of toll-free telephone service customer satisfaction surveys;
- interviewed officials at IRS walk-in assistance sites in the Georgia and Northern California District Offices; observed walk-in services provided at shopping centers, grocery stores, and mobile van sites by the Georgia, Northern California, and Central California District Offices; and analyzed the results of walk-in customer satisfaction surveys;
- interviewed IRS National Office officials about the Taxpayer Education Program, with an emphasis on volunteer tax preparation services that are supported by IRS;
- interviewed officials in IRS’ Office of Electronic Tax Administration about various initiatives undertaken in 1999 in an effort to increase the use of electronic filing, reviewed data on taxpayer participation in the initiatives, and reviewed data on the results of those initiatives;
- interviewed officials in IRS’ EIC Project Office and in the Atlanta, Fresno, and Kansas City, MO, Service Centers about various efforts to improve the level of compliance associated with EIC claims; analyzed data on the results of those efforts; and interviewed National Office officials located at the Brookhaven, NY, Service Center who were responsible for national EIC compliance efforts;
- reviewed IRS’ returns processing guidance relating to the child tax credit, interviewed service center officials about their processing procedures, and discussed potential changes to IRS’ forms and instructions with officials at IRS’ National Office;
- obtained filing-season information from the largest national tax return preparation firms; and
- reviewed reports issued by the Treasury Inspector General for Tax Administration on filing-season activities.
We did our work at IRS’ National Office; the Atlanta, Brookhaven, Fresno, and Kansas City Service Centers; the Customer Service Center in Atlanta; and the Georgia and Northern California District Offices. We requested comments on a draft of this report from the Commissioner of Internal Revenue. IRS provided comments in a letter dated December 3, 1999, and at a related meeting on the same date. We have incorporated IRS’ comments as appropriate and have reprinted the letter in appendix II.
We did our work from November 1998 through October 1999 in accordance with generally accepted government auditing standards. IRS uses various measures to gauge its performance during a filing season. Those measures relate to timeliness, such as the number of days needed to process and issue refunds; quality, such as the accuracy of notices sent to taxpayers and answers to taxpayers’ questions; and service accessibility, such as the extent to which taxpayers with tax-related questions were able to reach IRS by telephone. In 1999, according to IRS’ own data, it met or exceeded its performance goals for five measures, came close to its goals for two measures, and fell short of its goals for two measures (i.e., quality of responses to tax law questions and level of access to the taxpayer service telephone system). As shown in table 2.1, there were four other measures for which IRS had no goal in 1999. IRS’ accomplishment in one of those areas, that is, timeliness of refunds for paper returns, raised some questions that IRS did not have the necessary data to answer. From 1989 to 1998, IRS measured the quality of assistance to taxpayers who called with tax law questions by making structured test calls to IRS assistors. This method was phased out after the 1998 filing season. During the 1998 filing season, besides making test calls, IRS also measured the quality of its tax law assistance by monitoring a sample of actual calls. In monitoring the calls, IRS assessed whether telephone assistors gave accurate responses and followed correct procedures when responding to taxpayers’ questions. IRS used the results of that monitoring (an 83.2-percent accuracy rate) as a baseline for setting its 85-percent accuracy goal for the 1999 filing season. IRS also changed the name of this measure from “accuracy of tax law assistance” to “tax law quality.” Results of IRS’ call monitoring during the 1999 filing season showed that IRS achieved an accuracy rate of 72.5 percent—12.5 percentage points below the goal and 10.7 percentage points below the achievement in 1998. IRS told us that one reason for the lower accuracy rate was the broader skill level each assistor needed in 1999. An IRS official explained that in 1998 an assistor may have needed expertise in only one or two topics. However, due to IRS’ change in the equipment used to route telephone calls, as discussed later in this report, an assistor may have been required to answer questions on several more topics in 1999. The official told us that even though the assistors received training on the additional topics, they may not have used the training in several years or received complete training in all areas of a topic. Therefore, with the increase in the number of topics an assistor was responsible for and dated and/or limited training in the topics, the assistors were not always able to answer taxpayers’ questions accurately. IRS also noted that, in some instances, the assistor may have provided a correct answer to a taxpayer’s question, but the call was scored as incorrect because the assistor failed to follow correct procedures. For example, the assistor may have failed to provide his or her employee identification number or inform the taxpayer which form or schedule to use when filing the return. During the filing season, IRS did not capture the data needed to determine the extent to which calls were inaccurate because the assistor provided an incorrect response versus failed to follow prescribed procedures.
As of the end of June 1999, IRS began gathering that kind of data. IRS said that it would use the data to help focus training for assistors, which it planned to provide before the 2000 filing season. We reviewed the method that IRS used to score the overall accuracy of its responses to tax law questions, that is, its decision to score the call as inaccurate if any part of the answer was incorrect. This scoring method is one of several analytical approaches for measuring accuracy and is more conservative than most other options because it tends to produce a lower accuracy rate than might result if other scoring methods were used. We also reviewed IRS’ sampling plan for assessing tax law quality and noted that IRS’ results for the filing season were based on telephone calls monitored for 8 hours a day, Monday through Friday, even though IRS provided telephone service 24 hours a day, 7 days a week. The effect on IRS’ accuracy measure, had the measure been based on 24-hours-a-day, 7-days-a-week monitoring, would depend on the extent to which taxpayers called during the nonmonitored hours and the difference, if any, between the accuracy of assistors who answered calls during the monitored hours versus the accuracy of those who answered calls during the nonmonitored hours. As discussed later in this report, IRS could not readily provide data on the number of calls received during the nonmonitored hours, and several IRS officials expressed concern about the skill level of assistors who worked during nonmonitored hours. According to cognizant IRS officials, IRS limited its monitoring to 8 hours a day, Monday through Friday, during the 1999 filing season because it did not have enough adequately trained staff to do more monitoring. That situation improved after the filing season, and, in May 1999, IRS began monitoring calls Monday through Saturday, 16 hours a day. IRS plans to continue that schedule during the 2000 filing season. We also noted the following regarding IRS’ sampling plan:
- The sampling plan includes a sample of the hours during which calls were monitored and then a sample of telephone calls on various subject matters within those hours. While the sample of hours is essentially random, the sample of telephone calls is a cluster sample within those hours. Clustering can make the resulting estimates less precise than a simple random sample of the same size would be, so precision estimates that do not account for the clustering could understate the sampling error, meaning that IRS’ estimates could be less precise than reported. After discussions with responsible IRS officials, they agreed to examine the effect of cluster sampling on the precision of IRS estimates.
- The sampling plan called for monitoring about 4,300 calls during the filing season; however, IRS monitored 2,960 calls—31 percent less than the plan. The planned sample size was based on assumptions about the number of incoming telephone calls per hour and estimates of the number of work hours available to monitor calls. IRS did not monitor as many calls as it had planned. There is concern that relying on assumptions and estimates that are not achieved may weaken the precision of the answer to the question the monitoring was intended to address—how accurate are the responses to taxpayers’ questions.
Each year, millions of taxpayers call IRS to ask questions about the tax law, inquire about their refunds, resolve account-related issues, and order forms. To assess how well it is serving these taxpayers, IRS measures the “level of access” and “level of service” provided by its telephone system.
Level of access indicates the extent to which taxpayers are able to access IRS’ system (i.e., not get a busy signal). However, it does not take into account the extent to which taxpayers get into IRS’ system but are put on hold and abandon their calls before an assistor comes on the line. In effect, level of access considers abandoned calls as successful call attempts. Level of service, on the other hand, considers abandoned calls as unsuccessful call attempts and, thus, measures the extent to which taxpayers are successful in reaching an assistor. As noted in its budget request for fiscal year 2000, IRS has identified level of access as one of its Servicewide performance measures. As we discussed in our testimony on that budget request, we believe that level of service would be the more appropriate Servicewide measure because it indicates the extent to which taxpayers were successful in actually talking to someone in IRS. The Department of the Treasury and IRS have recently expressed agreement with that position. During the 1999 filing season, IRS reported its level of access at 69 percent and its level of service at 55 percent. These accomplishments were well below the 91-percent level of access and the 74-percent level of service IRS reported for the 1998 filing season. In the next chapter, we discuss the significant decline in IRS’ telephone service and various factors that may have contributed to the decline. One of IRS’ customer service standards says that “If you file a complete and accurate tax return and you are due a refund, your refund will be issued in 6 weeks.” In that regard, IRS, each year, reviews samples of individual income tax returns to measure its timeliness in issuing refunds. IRS takes separate samples of returns filed on paper and returns filed electronically. In the past, IRS measured its timeliness in terms of the average number of days it took taxpayers to receive a refund. In 1998, for example, it determined that filers of paper returns received their refunds within an average of 34.1 days while filers of electronic returns received their refunds within an average of 15.1 days. In 1999, IRS used the same sampling methodology but changed the way it measured timeliness. Instead of computing average refund time, IRS computed the percentage of refunds processed within a certain time frame—40 days or less for paper returns and less than 21 days for electronic returns. We agree with this change in methodology. Showing a percentage of refunds that met IRS’ timeliness goal seems more informative than just showing the average number of days it took for taxpayers to receive a refund. As of the end of April 1999, IRS’ refund test results showed that 84.7 percent of the refunds for individual income tax returns filed on paper had been processed within 40 days, and that 99.6 percent of the refunds for returns filed electronically had been processed in less than 21 days. The result for electronic returns exceeded IRS’ goal for 1999 (98 percent) and IRS’ performance in 1998 (98.3 percent). IRS did not set a goal for paper returns. However, IRS’ reported accomplishment in 1999 (84.7 percent) was below its reported accomplishment in 1998 (88.1 percent). Although IRS’ accomplishment in 1999 was close to its accomplishment in 1998, about 15 percent of the taxpayers who had filed paper returns and had claimed a refund did not receive those refunds within 40 days. 
IRS could not tell us how many days beyond 40 these taxpayers had to wait before receiving their refunds, so the significance of IRS’ untimeliness is unclear. Some of the untimeliness may be due to delays caused by taxpayers. For example, one official told us that some of the delays were because taxpayers who were entitled to the new child tax credit did not claim it. As discussed later in this report, IRS corrected those returns to include the credit, thus adding to its processing time and increasing the amount of the refund. Because the samples for IRS’ refund timeliness measure included returns with errors, such as math errors and errors associated with the child tax credit, we asked IRS what its sample results showed for returns that were error free. We wanted to compare those results to IRS’ customer service standard, which promises a refund within 6 weeks if a taxpayer files a complete and accurate return. IRS said that it did not have that information. IRS took several steps in an attempt to improve telephone service in 1999. But service did not improve; it deteriorated. Our discussions with IRS officials and analysis of relevant documentation indicated that the deterioration resulted from (1) unrealistic assumptions about the implementation and impact of IRS’ changes and (2) other problems managing staff training and scheduling and implementing new technology. Although we recognize the difficulty in anticipating how new initiatives will work and what their effect will be, the problems IRS encountered, when considered together, raise significant questions about IRS’ management of the telephone assistance program in 1999. We saw evidence of (1) assumptions and decisions that appeared to be based on inadequate data or that seemed to ignore existing data, (2) a failure to appropriately time the training of assistors and coordinate the timing of union negotiations that would directly affect productivity and the development of work schedules, (3) inadequate testing and contingency planning with respect to new call routing technology, and (4) the absence of data that management would need to adequately assess what happened in 1999 and provide a basis for making appropriate changes for the 2000 filing season. In an effort to improve its telephone service in 1999, IRS, among other things, extended its hours of operation and began managing its telephone operations centrally, which included implementation of new call routing technology. In 1998, IRS’ customer service representatives were available 16 hours a day, 6 days a week (7 a.m. to 11 p.m., Monday through Saturday), to answer questions from taxpayers about the tax law, their accounts, or their refunds. For the 1999 filing season, IRS expanded that service to 24 hours a day, 7 days a week. IRS officials said that they believed that around-the-clock service would level the demand for service. For example, there has traditionally been heavier demand for telephone service on Mondays. IRS officials speculated that many taxpayers worked on their tax returns during the weekends and tended to call at the first opportunity for assistance on Monday. IRS hoped to reduce such peak demand times and distribute demand more evenly by making assistance available at any time. Although IRS’ customer service representatives were available all night and on weekends, it is uncertain to what extent they were able to assist taxpayers who called during those times.
For example, some taxpayers who called to resolve account issues during the night and on weekends did not receive the assistance they needed because normal maintenance requirements caused IRS’ main taxpayer data computer system, known as the Integrated Data Retrieval System (IDRS), to be unavailable, mainly in the early morning hours on weekdays, 8 hours on Saturdays, and all day on Sundays. IRS offered around-the-clock service for account-related issues in 1999 even though it knew that IDRS would not always be available. IRS believed that assistors would be able to serve many taxpayers who called during IDRS downtime by accessing another information system that, according to IRS, was available virtually anytime. However, that other system was insufficient in many cases. According to IRS’ review of a sample of telephone calls to selected service centers on four Sundays during the filing season, 20 percent of the taxpayers were told to call back during the week when IDRS would be available. In 1999, IRS began managing its telephone operations centrally through its Customer Service Operations Center in Atlanta. As part of this centralized management, IRS developed its first national call schedule that projected the volume of calls—for each half-hour—at each of IRS’ 24 call sites and the staff resources that would be needed to handle that volume. As an integral part of its new approach to managing telephone service, IRS implemented a new customer service call router. The router was intended to lessen disparities in the level of service taxpayers receive by sending each call to the first available assistor nationwide who had the necessary skills to answer the taxpayer’s question. To do this, the router was to use real-time information about (1) the nature of each taxpayer’s question and (2) the availability of qualified telephone assistors nationwide. We provide more information on how the new call router was supposed to work, and how it actually worked, later in this chapter. Although the various initiatives just discussed were intended to improve IRS’ telephone service, that service declined significantly during the 1999 filing season. Compared to the 1998 filing season, as shown in table 3.1, level of access decreased from 91 percent to 69 percent and level of service decreased from 74 percent to 55 percent. IRS’ performance in providing telephone service during the 1999 filing season reached its lowest point during the week of February 6, 1999 (see fig. 3.1). During that week, level of access and level of service were 31 percent and 25 percent, respectively, as compared to 92 percent and 77 percent, respectively, for the same week in 1998. Accessibility improved after the initial weeks of the filing season and almost reached the levels achieved in 1998 during the week of March 13. However, unlike 1998, performance then moved steadily downward until the end of the filing season. Our discussions with IRS officials and our review of available documentation indicated that the decrease in telephone service during the 1999 filing season resulted from unrealistic assumptions about the implementation and impact of IRS’ changes. For example, IRS assumed that assistor productivity would increase, but it decreased; IRS assumed that around-the-clock service would level demand more than it did; and IRS assumed that work schedules were adequate, but they proved to be flawed.
Among other things, IRS’ assumptions led to discontinuance of a special procedure for handling complex tax law questions, which further contributed to the deterioration in telephone service. In planning for the 1999 filing season, IRS originally projected a slight increase in productivity due to implementation of the new call router. IRS officials expected the call router to increase productivity by routing calls to the first available assistor qualified to answer the taxpayer’s question, thereby preventing assistors from sitting idle. However, according to IRS officials and data, productivity actually decreased in 1999 as compared to 1998. IRS officials cited several factors that might have led to lower-than-expected productivity, including the following:
- To staff its around-the-clock service, IRS offered experienced seasonal employees permanent positions if they agreed to work the off-hours. According to some officials, the movement of these productive, skilled seasonal employees left a gap during the core hours when most taxpayers seek assistance—from 9 a.m. to 5 p.m. IRS filled the core hours with newly hired, less skilled seasonal employees. Other officials said that there was a skill gap during the off-hours, especially during the overnight shift. However, IRS only had to move staff at a few sites. Of IRS’ 24 call sites, 2 answered taxpayer telephone calls during the overnight shift, and 5 regularly answered calls on Sundays.
- IRS discontinued use of a call management tool called “auto-available,” which, as soon as a call was completed, automatically routed another telephone call to the assistor. During the 1999 filing season, pursuant to an agreement with the National Treasury Employees Union (NTEU), assistors were placed in a waiting status after each call and remained in that status until they pressed a keyboard button that put them in an available status. IRS officials said that, by definition, this practice added some amount of time to each call, causing other calls to receive busy signals and, thus, lowering accessibility.
- Changes that required new procedures, new responsibilities, and training for assistors affected productivity. For example, the IRS Restructuring and Reform Act of 1998 required that assistors provide their name and employee identification number at the beginning of each call, which added a little time to each call. Also, according to a cognizant official, the need to train all staff on various provisions of the IRS Restructuring and Reform Act of 1998 and train those assistors who, in accordance with the NTEU agreement, had accepted broader responsibilities in exchange for an increased pay grade led to a shortage of assistors who were available to answer the telephone during the early part of the filing season.
IRS expected that around-the-clock service would level demand by enabling some taxpayers to call during off-hours, thereby reducing the number of calls coming in during peak times. However, according to cognizant officials, IRS’ expectations about leveling demand did not fully materialize. As a result, IRS underestimated the number of assistors needed to answer incoming call volume during the hours that most taxpayers called and, according to one official, off-hours staff often sat in available status with no call to answer.
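For reference, the level-of-access and level-of-service figures cited in this chapter differ only in how they treat calls that enter IRS' system but are abandoned before an assistor answers. The sketch below computes both measures from call counts; the call volumes are invented (chosen to reproduce the reported 1999 percentages), and the formulas are a simplified reading of the definitions earlier in this report, not IRS' exact computation.

```python
def telephone_measures(call_attempts, busy_signals, abandoned, answered_by_assistor):
    """Compute the two telephone measures (simplified).

    Level of access treats abandoned calls as successful attempts (the caller
    got into the system); level of service does not (the caller never reached
    an assistor).
    """
    got_into_system = call_attempts - busy_signals   # includes abandoned calls
    level_of_access = got_into_system / call_attempts
    level_of_service = answered_by_assistor / call_attempts
    return level_of_access, level_of_service

# Hypothetical period: 10 million attempts, 3.1 million busy signals,
# 1.4 million abandoned while on hold, 5.5 million answered by an assistor.
access, service = telephone_measures(10_000_000, 3_100_000, 1_400_000, 5_500_000)
print(f"Level of access:  {access:.0%}")   # 69%
print(f"Level of service: {service:.0%}")  # 55%
```

The gap between the two numbers is driven entirely by the abandoned calls, which is why level of service is the more demanding measure.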
IRS did not have data readily available to determine the volume of calls received during its expanded hours of telephone service, even though such data would seem important in assessing the impact of around-the-clock telephone service and in developing workload assumptions and staffing plans for the 2000 filing season. After we requested these data, IRS compiled information on the number of calls received on Sundays but was unable to provide data on the number of calls it received during the overnight hours of 11 p.m. to 7 a.m. The information IRS compiled showed that about 2.3 million calls were received on Sundays over IRS’ three main tax law and account telephone lines—about 4 percent of the 57.3 million calls received on those three lines during the filing season. IRS’ expectation that around-the-clock service would level demand and reduce peak demand times may have been unrealistic considering its experience in 1998. According to IRS officials, in 1998 IRS first attempted to level demand by extending its service to 6 days a week, 16 hours a day. The expected leveling did not occur, and IRS officials told us, at that time, that it would probably take 4 to 5 years for taxpayers to become familiar with the extended hours of service available. IRS’ original staffing plans were based on the assumption that IRS would have the authority that it had last year to direct staff to work other than their regular work schedule. According to IRS, the new agreement with NTEU limited the extent to which IRS could change an assistor’s regular work schedule. The agreement stipulates that IRS must first seek volunteers before requiring assistors to change their work hours to meet staffing shortages at a site. A cognizant official said that IRS had to quickly redo work schedules to accommodate the volunteer provision, which resulted in significant staff shortages in relation to projected call volumes and, thus, schedules that were not as sound as they should have been. The timing of the completion of the work schedules also presented a problem. IRS’ negotiations with NTEU took longer than IRS expected, and the agreement was not completed until October 1998. IRS officials said the schedules were not made available until December 1998, just a few weeks before the beginning of the filing season. In responding to a survey of IRS call sites by the Treasury Inspector General for Tax Administration, several call site officials said that the timing of the schedules did not allow adequate time to hire and train staff, and that, during the filing season, schedules were changed frequently with little advance notice and did not allow for adequate planning. One result of IRS’ unrealistic assumptions was a decision, before the filing season began, to discontinue a procedure that IRS had used in 1997 and 1998 to handle the more complex calls from taxpayers. Because discontinuance of that procedure had a negative effect on telephone service, IRS reinstated the procedure after the start of the filing season. As we discussed in our 1997 filing season report, IRS studied the topic and length of taxpayers’ telephone calls and found that certain topics resulted in assistors’ spending significantly more time per call. As a result, IRS revised its procedures so that, in 1997 and 1998, callers with questions in complex areas were automatically connected to a voice messaging system.
Taxpayers were asked to leave their name, telephone number, and address and the best time to reach them so that their call could be returned within 2 to 3 business days. IRS’ Examination function supported the Customer Service function by detailing staff to return taxpayer calls from the voice messaging system. According to IRS, routing these potentially lengthy, complex calls to a recording freed up assistors to handle shorter, less complex calls. According to cognizant IRS officials, IRS decided not to use voice messaging for the 1999 filing season because IRS believed that (1) requiring taxpayers to leave messages and then calling them back days later was not the best possible customer service and (2) there would be adequate customer service staff to handle the expected volume of calls as the calls came in. IRS based the expectation that it would have adequate staff on the assumptions that (1) productivity would increase through improved call routing and (2) demand would be leveled through around-the-clock service. However, IRS’ expectations about productivity and demand leveling were not realized, as previously discussed. Therefore, attempting to answer these more complex, lengthy calls with “live” assistors contributed to longer on-hold times and busy signals for other taxpayers. A cognizant official said that the decision to discontinue voice messaging in 1999 was a mistake, considering that these complex calls take longer to resolve. In response to longer on-hold times for assistance and excessive busy signals, IRS reestablished voice messaging in mid-February. According to a cognizant official, IRS plans to continue using voice messaging in 2000. IRS’ new call routing equipment was designed to route a taxpayer’s telephone call where it could most quickly be answered on the basis of the availability of an assistor and the type of question. However, at various times, IRS had to limit its use of the call router because of various problems, such as a lack of standardized programming among call site computers. Also, at the time we completed our audit work, IRS had not reviewed the performance of the call router and, therefore, could not determine if the desired results were achieved or what impact the equipment had on taxpayer access to the telephone system. Before the 1999 filing season, calls were routed to a call site by area code or on the basis of the percentage of the total available staff the site had scheduled to work (known as the staff allocation-based routing system). These methods could have caused service disparities for taxpayers. Calls routed to a busy site might have had long on-hold times or received busy signals, while another site might have had low demand and provided immediate assistance. Calls could not easily be rerouted. Historically, IRS call sites operated with a great degree of independence; there were no comprehensive, uniform standards as to how taxpayer calls would be handled. Therefore, taxpayers could experience a different quality of service depending on where their calls were answered. IRS’ new call router was designed to remedy any disparities by performing “intelligent” routing in two stages. First-stage intelligent routing was to send the call to the site that, on the basis of assistor availability and expected on-hold times, appeared to be the site that would most quickly answer the taxpayer’s call.
Second-stage intelligent routing was to be done after the call had been routed to a site and the taxpayer had responded to IRS’ automated telephone service menu, thus indicating the nature of his or her call. If an assistor who was qualified to handle that type of call was not immediately available at that site, the call router was to query real-time accessibility data on computers at all of the call sites. The call would then be routed to the qualified assistor who, on the basis of accessibility data, would be able to answer the call most quickly no matter where the assistor was located. However, at various times during the filing season, problems kept IRS from using intelligent routing. During the first weeks of the 1999 filing season, IRS only did limited second-stage intelligent routing as it tested the system and corrected errors in site computer programming. According to IRS officials, programming at the call sites had to be standardized to accurately route calls. Without correct programming, a call could be routed to a site that actually did not have an assistor available for that type of call. IRS officials said that the programming errors had not surfaced during pre-filing season testing of the system because the volume of calls was not great enough to reveal the errors. IRS corrected these errors and began using second-stage intelligent routing regularly on February 17, 1999. IRS also had to limit its use of first-stage intelligent routing during the 1999 filing season. In response to the low accessibility rate, IRS began using a feature known as selected expanded access (SEA) in early February. When the queue for speaking to an assistor is full, SEA gives taxpayers the option to access automated, interactive telephone services and TeleTax; otherwise, these taxpayers would receive a busy signal. Because IRS did not expect to use SEA, the call router had not been programmed with the capability to centrally determine when a taxpayer should be given access to the automated services. The use of SEA resulted in disparities in access when intelligent routing was used because the programming for sites was not uniform as to when SEA was to be offered. Therefore, IRS discontinued first-stage intelligent routing after it started using SEA. Until the router could be programmed for SEA, IRS switched to the staff allocation-based routing system it used in the 1998 filing season. Once a call was routed to a site under that system, second-stage intelligent routing could still be used to send the call to a specific assistor. IRS staff monitored accessibility data among sites and rerouted calls when disparities arose. After the call router was reprogrammed and tested for SEA, IRS reinstated first-stage intelligent routing on its toll-free line for tax law questions on April 9, 1999. SEA programming for the account and refund lines was completed, tested, and implemented soon after the April 15 filing date. IRS officials had varying views on the impact of the call router problems on accessibility. One official said that some of the early problems with the router decreased accessibility because site computer programming errors caused calls to be sent to sites that did not have assistors available and this caused increased wait times. Another official maintained that routing worked properly at this stage. Still another official characterized the problems with the router as having caused IRS to miss opportunities to improve access, rather than actually causing decreased accessibility.
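The two-stage "intelligent" routing described above amounts to first choosing a site by expected wait and then, once the caller's topic is known, finding a qualified assistor anywhere in the network. The sketch below is a deliberately simplified, hypothetical model of that logic, with invented site and skill data; it is not IRS' router software.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    expected_hold_seconds: float                              # projected on-hold time at this site
    available_assistors: dict = field(default_factory=dict)   # topic -> free, qualified assistors

def first_stage(sites):
    """Stage 1: send the call to the site expected to answer soonest."""
    return min(sites, key=lambda s: s.expected_hold_seconds)

def second_stage(sites, topic):
    """Stage 2: once the caller's topic is known (from the automated menu),
    find a qualified, available assistor anywhere in the network."""
    candidates = [s for s in sites if s.available_assistors.get(topic, 0) > 0]
    if not candidates:
        return None  # e.g., offer automated services or keep the caller in queue
    return min(candidates, key=lambda s: s.expected_hold_seconds)

sites = [
    Site("Atlanta", 45, {"tax_law": 2, "refund": 0}),
    Site("Fresno", 120, {"tax_law": 0, "refund": 5}),
]
entry_site = first_stage(sites)             # where the call lands initially
best_site = second_stage(sites, "refund")   # where a qualified assistor is free
print(entry_site.name, "->", best_site.name if best_site else "automated services")
```

The sketch also shows why standardized, accurate availability data at every site matters: if a site misreports its free assistors, stage 2 can send a call where no one is qualified to take it.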
IRS did not do a systematic review of the router’s performance in 1999. According to the IRS project manager, multiple changes to IRS’ telephone operations in 1999 made it impossible to isolate the impact of the router on such things as accessibility and productivity. Absent such information, IRS has no reliable basis for determining whether the router was effective or how, if at all, to improve its effectiveness in 2000. Staff at IRS’ walk-in sites answer tax law questions, distribute tax forms and publications, and help taxpayers prepare their returns and resolve their account issues. As used in this report, the term “walk-in sites” includes IRS’ walk-in offices that are generally open all year and temporary locations that IRS sets up during the filing season. IRS data show that walk-in sites served about 6.2 million taxpayers between January 1 and May 1, 1999—a slight increase over the number served during the same period in 1998. Overall, IRS’ walk-in assistance efforts during the 1999 filing season were positive. For example, IRS enhanced the availability of services at its walk-in offices and expanded the availability of assistance to taxpayers who did not have convenient access to a walk-in office. Taxpayers who completed a customer satisfaction survey as the result of a visit to a walk-in site scored their overall satisfaction with an average of 6.44 on a 7-point scale. However, while IRS made progress in assessing customer satisfaction with walk-in services in 1999, it made little progress in measuring the quality and timeliness of those services. During the 1999 filing season, IRS enhanced the availability of services at its walk-in offices by increasing the number of offices, expanding the availability of Saturday service, and providing targeted EIC assistance earlier than it did in 1998. According to IRS, 236 walk-in offices were open in IRS’ 33 districts during the 1999 filing season as compared to 178 offices during the 1998 filing season. Also, according to IRS, most walk-in offices were open on each Saturday during the 1999 filing season, which was not the case in 1998. In that regard, of the 236 walk-in offices that were open in 1999, 182 provided 4 hours of service on each Saturday during the filing season, and the other 54 provided service on selected Saturdays. By comparison, each of the 178 offices that were open during the 1998 filing season provided service on only 6 Saturdays. According to IRS, walk-in offices served 141,725 taxpayers on Saturdays during the 1999 filing season as compared to 82,722 taxpayers served on Saturdays during the 1998 filing season. As in 1998, IRS provided targeted assistance to potential EIC claimants by scheduling EIC Awareness Days at many walk-in offices. However, unlike in 1998, IRS scheduled these days early in the 1999 filing season. IRS scheduled the EIC Awareness Days for the first six Saturdays of the 1999 filing season as compared to mid-March through mid-April in 1998. This scheduling change was consistent with our July 1998 recommendation that customer service efforts aimed at EIC claimants be made available early in the filing season. During the 1999 filing season, IRS took steps to expand the availability of assistance to taxpayers who could not easily reach a walk-in office. Those steps included a greater use of alternative ways to provide walk-in service and an effort to improve the availability of tax forms in certain parts of the country.
For the 1999 filing season, IRS’ National Office encouraged districts to make more use of nontraditional ways, such as mobile vans and kiosks at retail locations, to provide walk-in service, particularly on Saturdays. The National Office stated that although these initiatives would benefit all IRS customers, they would be of particular value to individuals in rural areas. In response to that direction, local IRS offices used various nontraditional outlets, such as shopping malls, community centers, the offices of state taxing authorities, grocery stores, copy centers, and newspaper inserts to help prepare returns, distribute tax forms and publications, and provide other types of taxpayer assistance in 1999. Additionally, some districts in three regions used mobile vans that went to less-populated, rural locations where taxpayers did not have easy access to a walk-in site. For example, vans in the Georgia District went to locales that were more than 40 miles from the nearest IRS office. Another effort to expand taxpayer service targeted underserved counties. In 1998, IRS’ National Office conducted a study in which it identified 478 counties that it considered underserved, at least concerning the availability of forms. IRS believes that additional forms distribution outlets in these counties may be advantageous. Although districts were encouraged to conduct outreach efforts in any underserved counties in their areas, the districts were not required to report the results of any such efforts to the National Office. In that regard, IRS did not have a formal national outreach program for the underserved counties during the 1999 filing season. Instead, according to a cognizant IRS official, the National Office deferred any nationally coordinated outreach efforts until 2000. Therefore, the National Office had no way of monitoring and determining the success or failure of outreach efforts in 1999. Despite the lack of national monitoring, some IRS districts, on their own initiative, provided the National Office with data on outlets that they had established in underserved counties in 1999. According to officials at those districts, they established community-based outlets in about 50 underserved counties in 1999. According to IRS’ business vision and its walk-in site mission statement, walk-in operations are to provide accessible, high-quality service to the public; reduce taxpayer burden; and ensure compliance with tax laws. IRS recognizes that providing accurate information and serving customers expeditiously and professionally are critical to the success of its walk-in program. However, in our reports on IRS’ 1998 filing season and IRS’ efforts to measure customer service, we discussed the lack of meaningful nationwide data for assessing the performance of IRS’ walk-in sites in terms of quality, timeliness, and taxpayer satisfaction. Our review of the 1999 filing season indicated that IRS had made progress in assessing taxpayer satisfaction with walk-in services but had made little progress in instituting key performance indicators for quality and timeliness. In response to an IRS Internal Audit report issued in 1997, IRS implemented a customer satisfaction survey at its walk-in sites during the 1998 filing season. However, due to printing and distribution problems, the survey for the 1998 filing season was not started until late in the filing season (about mid-March) and, therefore, did not provide a complete assessment of taxpayers’ satisfaction with the walk-in program. 
For 1999, IRS added questions to the survey, such as a question about how long it took the taxpayer to receive service, and conducted the survey at all walk-in sites during the entire filing season. Results of the walk-in surveys distributed during the first 3 months of the 1999 filing season, as summarized by IRS’ contractor, showed an average overall satisfaction of 6.44 on a 7-point scale. The contractor’s summary also showed that 89 percent of the walk-in customers who completed the surveys indicated a high degree of satisfaction with the services obtained at IRS’ walk-in sites; none of the 10 areas customers were asked to rate (such as convenience of office hours, employee courtesy, and promptness of service) received a score below 6.06; customers who came into an IRS site for help in preparing their tax returns gave the highest satisfaction ratings, while customers who requested a form or publication gave the lowest; and taxpayers whose wait times were less than 15 minutes gave higher satisfaction ratings than customers who waited longer. In response to findings in the 1997 Internal Audit report and an IRS Walk-In Steering Committee report issued in 1998, IRS’ National Office stated that the walk-in program’s quality assurance process had to be improved. The National Office said that IRS would be assessing quality at its walk-in offices—including the accuracy of responses to taxpayers’ questions and the professional treatment of customers—during the 1999 filing season through quality reviews and filing season readiness reviews. However, quality reviews were not done and the National Office did not provide specific guidance on what should be examined during readiness reviews. Quality reviews (during which regional staff who are unknown to personnel at the walk-in office pose as taxpayers) were scheduled to begin in fiscal year 1999 and were designed to examine the implementation of standardized services, office environment, proper use of equipment, and accuracy of responses at walk-in offices. However, early in the 1999 filing season, IRS decided not to do quality reviews because, according to IRS, it did not have the necessary time or resources to implement the program. Filing-season readiness reviews were conducted by IRS regional officials before the start of the filing season. These reviews are designed to determine if a walk-in office is prepared for the filing season. IRS’ National Office provided regional offices with some overall management guidance, but that guidance did not include specific requirements on how the site was to assess the quality of its assistance to taxpayers. Additionally, the National Office did not require that regional offices communicate the results of filing season readiness reviews; any identified problems were to be handled within the region. Regarding timeliness, the National Office established taxpayer wait-time goals at walk-in sites of 30 minutes for return preparation and 15 minutes for all other forms of assistance for both the 1998 and 1999 filing seasons. However, IRS did not have a complete mechanism for monitoring performance. For example, the National Office did not require the regions to report wait times, and most sites had to manually track wait times, thus making the data more prone to error. IRS has an automated system, known as Q-Matic, that is designed to enable accurate tracking of customer wait times.
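Once a site captures when a customer arrived and when an assistor began helping, which is what Q-Matic is designed to do, checking performance against the 15- and 30-minute goals is a simple computation. The sketch below is a minimal, hypothetical illustration with invented contact records; it is not the Q-Matic product's actual software.

```python
from datetime import datetime

# Wait-time goals by type of assistance, in minutes (from the goals described above)
GOALS = {"return preparation": 30, "other assistance": 15}

def wait_minutes(ticket_time, service_start):
    """Minutes a customer waited between arriving (ticket issued) and being helped."""
    return (service_start - ticket_time).total_seconds() / 60

# Hypothetical contact records: (service type, ticket issued, assistor began helping)
contacts = [
    ("return preparation", datetime(1999, 2, 6, 9, 10), datetime(1999, 2, 6, 9, 52)),
    ("other assistance",   datetime(1999, 2, 6, 9, 15), datetime(1999, 2, 6, 9, 27)),
]

for service, ticket, start in contacts:
    waited = wait_minutes(ticket, start)
    met = waited <= GOALS[service]
    print(f"{service}: waited {waited:.0f} min -> goal {'met' if met else 'missed'}")
```

The same check is only as good as the timestamps behind it, which is why manual sign-in sheets make the resulting wait-time data more error prone.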
That system was operational at 33 of IRS’ walk-in sites during the 1999 filing season as compared to 8 sites during the 1998 filing season. As customers arrive at walk-in sites with the Q-Matic system, they are to take or are to be given a numbered ticket from the Q-Matic ticket printer. The ticket reflects the estimated wait time for the service, and the system will automatically “call” the customer when it is his or her turn. The system records the time that a customer received a ticket and the time that an assistor started helping the customer. Most IRS sites did not have the Q-Matic system in 1999. Some of those sites used a manual system, whereby a greeter or receptionist was to record on a taxpayer contact card the time that the taxpayer arrived and an assistor was to record on the same card the time that he/she started to help the taxpayer. Other non-Q-Matic sites relied on greeters or taxpayers to fill out a sign-in sheet. According to a cognizant IRS official, the non-Q-Matic methods of tracking wait times are more prone to error because they are manual. Because complete and reliable data are not available, IRS cannot determine if the walk-in program met the wait-time goals of 15 and 30 minutes during the 1999 filing season. In addition to the help that is available to taxpayers over the telephone and at IRS walk-in sites, taxpayers can receive assistance from various other IRS or IRS-sponsored sources. Those sources include IRS’ World Wide Web site on the Internet; IRS-sponsored volunteer tax return preparation sites; IRS’ Tax-Fax program, through which taxpayers can order and receive forms and instructions via a fax machine; and a corporate partnership program, through which employees of participating corporations can obtain copies of IRS forms and publications at their work sites. Table 5.1 shows that use of each of these sources increased during the 1999 filing season as compared to 1998. As table 5.1 indicates, the most used of these services was IRS’ Web site. Despite the general success of that site, including a favorable assessment by an outside organization, some problems existed with the timeliness and quality of IRS’ responses to questions received from taxpayers via electronic mail (E-mail). Also, although the number of IRS-sponsored volunteer sites increased, as did the number of taxpayers assisted at those sites, many sites reported problems with inadequate staffing, funding, software, and hardware that affected their ability to function effectively. Among other things, IRS’ Web site offers taxpayers hundreds of tax forms and publications for immediate downloading as well as the latest tax information, answers to the most frequently asked questions, details about electronic filing, and special features on items such as the child tax credit. The Web site also offers taxpayers the ability to submit tax law or procedural questions to IRS via E-mail. As shown in table 5.1, taxpayers’ use of IRS’ Web site during the 1999 filing season increased significantly when compared to 1998. The number of “hits” increased by 114 percent, the number of files downloaded increased by 113 percent, and the number of E-mail questions received increased by 89 percent. An independent rating of service on IRS’ Web site on April 15, 1999, stated that the site delivered “remarkable” quality of service on that day. The rating showed that the home page was delivered in an average of 6.9 seconds, with an availability rate of 97.4 percent.
The rating showed that the level of service also improved over last year. During the busiest half-hour on April 15, 1999, the average performance time was 17 seconds compared to 23.2 seconds during the peak half-hour on April 15, 1998.

Taxpayers who access IRS’ Web site may submit their tax law or procedural questions to IRS for a response via E-mail. This service began as a pilot at the Nashville Customer Service Site during the 1994 and 1995 filing seasons. In March 1996, it became a year-round project in Nashville. Four additional sites were added in 1998 and were on-line by the 1999 filing season. IRS’ goal during the 1999 filing season was to respond to E-mail questions within 2 business days. However, during the 1999 filing season, IRS’ average response time was 4.11 calendar days. Although IRS’ goal was stated in business days, IRS’ information system tracked response times in calendar days due to a coding problem. Nevertheless, IRS decided to use the calendar day data to assess its performance.

According to IRS officials, the following factors contributed to longer response times:

The questions customer service representatives received via E-mail were more complex than those received over the telephone. This posed a particular problem for customer service representatives who, because of the program’s expansion in 1999, were responding to E-mail questions for the first time and did not have the benefit of past experience. Besides response time, complexity also affected response accuracy. In that regard, IRS tests showed that only 65 percent of the responses to E-mail questions during the 1999 filing season were accurate.

Customer service representatives who answered the E-mail questions were also responsible for answering telephone questions. In that regard, the increased telephone demand strained the sections’ resources so that they were unable to manage the E-mail inventory simultaneously.

The volume of E-mail questions increased to the point that the sites had to borrow personnel from IRS’ Compliance function to help provide timely responses. As previously noted, the number of E-mail questions that IRS received between January 1 and April 15, 1999, increased 89 percent when compared to the same time period in 1998.

All E-mail customers were given the opportunity to respond to a customer satisfaction survey and provide general comments indicating their satisfaction with the E-mail service. According to IRS data, 3,571 taxpayers responded to the survey between January 1 and April 30, 1999 (representing about 2.2 percent of the total E-mail questions received). Of the taxpayers who responded to the survey, 94 percent said that they were satisfied with the time it took to get a response to their E-mail question. What is unknown, however, is how long it took the respondents to the survey to get answers to their questions. It is possible that those who were satisfied with the response time received their response within 2 business days. Additionally, 79 percent of the respondents said that the response they received via E-mail answered their question.

According to a cognizant official, IRS will keep its goal of 2 business days for responding to E-mail questions for the 2000 filing season and will add four new sites to respond to those questions. According to the official, because there is more historical data on the E-mail project, there should be better projections of E-mail volumes and the number of cases assistors can handle during the 2000 filing season.
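The gap between the 2-business-day goal and the 4.11-calendar-day average is partly a measurement question, since the two counting conventions diverge whenever a weekend intervenes. The following minimal sketch is purely illustrative and is not IRS's tracking system; the function names are assumptions, and federal holidays are ignored for simplicity.

    from datetime import date, timedelta

    def calendar_days(received: date, answered: date) -> int:
        # Elapsed calendar days between receipt of a question and the response.
        return (answered - received).days

    def business_days(received: date, answered: date) -> int:
        # Elapsed weekdays (Monday-Friday); federal holidays are ignored here.
        count, current = 0, received
        while current < answered:
            current += timedelta(days=1)
            if current.weekday() < 5:  # 0-4 correspond to Monday through Friday
                count += 1
        return count

    # A question received on Friday, April 9, 1999, and answered on Monday, April 12:
    received, answered = date(1999, 4, 9), date(1999, 4, 12)
    print(calendar_days(received, answered), business_days(received, answered))  # prints: 3 1

As the example shows, a response that meets the business-day goal can still look slow when measured in calendar days, which is why tracking in the same units as the goal matters.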
The official added that better projections would result in better work plans, which should enable IRS to meet its response-time goal by ensuring that an adequate number of staff are available to meet the demand for service. The official also said that IRS is planning to correct the coding problem before the start of the 2000 filing season so that its system will track response times in business days, which would enable IRS to measure actual response times against its goal. In regard to improving the quality of the E-mail responses, the official said that IRS would be providing assistors with additional training on the six E-mail topics with the highest error rates before the start of the 2000 filing season.

IRS sponsors volunteer tax return preparation through its Volunteer Income Tax Assistance (VITA) and Tax Counseling for the Elderly (TCE) programs. VITA offers free tax help to persons with low to limited income, persons who are non-English speaking, and persons with disabilities. TCE offers free tax help to elderly taxpayers. IRS reported that 7,384 VITA and TCE sites had assisted 1,907,151 taxpayers during the 1999 filing season. Those numbers compared favorably to the 1,718,995 taxpayers assisted at 5,783 sites in 1998. However, according to IRS reports, (1) sites in three IRS regions reported a lack of staff to adequately implement the VITA program, (2) sites in three regions reported problems with software and hardware, and (3) sites in two regions reported funding and equipment problems that hampered their ability to file returns electronically. According to IRS officials, these problems affected the sites’ ability to serve taxpayers effectively.

Congress and IRS have long been concerned about noncompliance with the eligibility requirements for the EIC. During the past several filing seasons, IRS implemented a number of efforts aimed at reducing that noncompliance. Generally speaking, those efforts involved (1) the denial of EIC claims that were not accompanied by valid SSNs and (2) in-depth reviews of EIC claims that met certain criteria. In 1999, IRS continued those efforts and stopped hundreds of millions of dollars in erroneous EIC payments. IRS also implemented new procedures in 1999, as mandated by TRA97, that require certain taxpayers to recertify their eligibility for the EIC before IRS approves their claim. According to IRS data, many taxpayers who had an EIC claim denied for tax year 1997 and were required to recertify did not claim the EIC on their tax year 1998 returns (i.e., the returns filed in 1999), thus indicating that the procedures may have helped reduce the number of improper EIC claims. What is unclear is how many of those taxpayers, if any, were entitled to the EIC but either could not understand the recertification process or found it too burdensome. In that regard, our review identified certain opportunities to streamline the recertification process and thus make it less burdensome to taxpayers and IRS. Our review also found that IRS service centers were not consistently following national guidelines for recertification, which could result in disparate treatment of taxpayers.

As IRS processes individual returns, it looks for computational errors made by taxpayers or their representatives in preparing the returns. When such errors are identified, IRS can automatically adjust the return through the use of its math error authority.
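As a purely illustrative sketch of the kind of adjustment the math error authority permits, the routine below disallows an EIC claim when a qualifying child's SSN cannot be validated and adjusts the claimed refund accordingly (the invalid-SSN application of this authority is described in the next paragraph). The record layout and field names are assumptions, not IRS systems.

    def apply_eic_math_error(return_record: dict, valid_ssns: set) -> dict:
        # Disallow the EIC and adjust the claimed refund when a qualifying child's
        # SSN cannot be validated; a negative refund here represents a balance due.
        adjusted = dict(return_record)
        invalid = [ssn for ssn in return_record["eic_qualifying_ssns"]
                   if ssn not in valid_ssns]
        if return_record["eic_claimed"] > 0 and invalid:
            adjusted["eic_allowed"] = 0.0
            adjusted["refund"] = return_record["refund"] - return_record["eic_claimed"]
            adjusted["notice"] = "EIC disallowed as a math error; invalid SSN(s): " + ", ".join(invalid)
        else:
            adjusted["eic_allowed"] = return_record["eic_claimed"]
        return adjusted

    example = {"eic_claimed": 1500.0, "refund": 1800.0,
               "eic_qualifying_ssns": ["123-00-6789"]}
    print(apply_eic_math_error(example, valid_ssns={"987-00-4321"}))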
During the first 6 months of 1999, according to data provided by IRS, it stopped about $412 million in erroneous EIC payments as a result of its math error program. Many of the EIC-related math errors corrected by IRS in 1999 involved invalid SSNs. In 1996, Congress authorized IRS to treat invalid SSNs as math errors, similar to the way it had historically handled computational mistakes. Thus, IRS has the authority to (1) automatically disallow, through its math error program, any deductions and credits, such as the EIC, associated with an invalid SSN and (2) make appropriate adjustments to any refund that the taxpayer might be claiming. As shown in table 6.1, the number of taxpayers claiming the EIC in 1999 dropped 1.9 percent from 1998, while the number of EIC-related math errors involving SSNs declined by more than 27 percent. The decrease in the number of EIC math errors involving invalid SSNs may indicate that fewer taxpayers are attempting to claim an EIC to which they are not entitled. It may also reflect that prior IRS efforts to alert taxpayers who had used invalid SSNs caused those taxpayers to correct the problem before filing their next year’s return.

Other types of EIC noncompliance are not as easy to identify as math errors. Those types can be detected only through an audit. In 1999, IRS continued to target for in-depth review certain types of EIC claims that IRS had identified as the main sources of EIC noncompliance. These targeted EIC claims include those that involve (1) the use of a qualifying child’s SSN on multiple returns for the same tax year, (2) erroneous claims of head-of-household filing status, and (3) misreported income. Taxpayers whose returns were identified for inclusion in one of these programs were to be audited to determine if their EIC claims were valid.

For fiscal year 1999, IRS anticipated a potential caseload of 421,393 cases involving multiple uses of the same qualifying child’s SSN. However, the actual caseload was 344,572 because taxpayers had either filed their tax year 1998 returns without the questionable SSN (57,024) or did not file any tax year 1998 return (19,797). As of August 28, 1999, IRS had completed audits on 204,912 cases (out of the 344,572) and had recommended that $379.6 million in erroneous claims not be paid.

Although filing status per se does not affect either EIC eligibility or amount (except that married taxpayers filing separate returns are ineligible for the EIC), IRS’ April 1997 study had shown that erroneous filings as head of household often occurred with an EIC overclaim. From October 1, 1998, to August 28, 1999, IRS had completed 256,365 audits examining taxpayers’ head-of-household status and recommended that $517.1 million in erroneous claims not be paid.

IRS’ misreported income projects focus on EIC claims that (1) appear to be inflated by the inclusion of nonqualifying income, such as investment income, in the computation of earned income or (2) involve earned income, such as self-employment income, that can be used to qualify for the EIC but cannot be verified through a third party. From October 1, 1998, to August 28, 1999, IRS had completed audits of 13,829 returns in these projects and recommended that $7.7 million in erroneous claims not be paid.

TRA97 requires that taxpayers who were denied the EIC through IRS’ deficiency procedures (i.e., as the result of an audit) must recertify their eligibility before they can claim the EIC again.
This provision became effective beginning with tax year 1997 returns filed in 1998. As a result, taxpayers who were denied the EIC on their tax year 1997 returns were required to recertify with their tax year 1998 return if they claimed the EIC on that return. TRA97 also has provisions that are intended to prevent a taxpayer from receiving an EIC for (1) the next 10 years if IRS, as a result of its audit, determined that the taxpayer had fraudulently claimed the credit or (2) the next 2 years if IRS determined that the taxpayer negligently claimed the credit. After the 10 or 2 years expire, the taxpayer has to recertify the next time he or she claims the EIC. IRS has a specific indicator that it can put on its master file of taxpayer accounts to identify taxpayers who are required to recertify. These recertification requirements appear to have further deterred improper claims, but the process may confuse taxpayers and unnecessarily delay the processing of their returns.

To recertify for the EIC, IRS requires that taxpayers attach a Form 8862 (Information to Claim Earned Income Credit After Disallowance) to the next tax return they file that includes an EIC claim. If a taxpayer claims the EIC without attaching a Form 8862, IRS is authorized to disallow the claim as a math error. According to IRS guidelines, service centers were to review the returns of all “required to recertify” taxpayers who claimed the EIC on their tax year 1998 returns and filed a Form 8862. If a “required to recertify” taxpayer either claimed the same EIC-qualifying child who was disallowed on the tax year 1997 return or claimed a new EIC-qualifying child, the return was to be examined. The taxpayer’s entire refund was to be held until IRS determined whether the taxpayer was entitled to the EIC. If the taxpayer did not claim the disallowed EIC-qualifying child and did not claim a new EIC-qualifying child, an audit was not required, and the taxpayer’s refund was to be released.

While examining the returns of taxpayers who are required to recertify, IRS notifies them that their refunds are being withheld pending a review of the EIC claim and that certain documentation is required. The documentation IRS expects from taxpayers includes copies of birth certificates and Social Security cards; documents, such as school records, to verify that the child lived with the taxpayer; and documents, such as canceled checks for household expenses or child support payments, to verify that the taxpayer supported the child. If a taxpayer provides the necessary documents and those documents support the taxpayer’s EIC claim, the claim is to be allowed and the taxpayer would not have to be recertified again for future EICs. Otherwise, the taxpayer’s claim is to be denied.

As of January 30, 1999, IRS had identified 197,625 taxpayers who were denied the EIC on their tax year 1997 returns (the returns filed in 1998) through IRS’ deficiency procedures. These taxpayers would have been required to submit a Form 8862 with their tax year 1998 return if that return included an EIC claim. As of August 28, 1999, according to IRS, of the 197,625 taxpayers, (1) 23,617 filed tax year 1998 returns claiming the EIC and attached a Form 8862 and (2) 63,372 filed returns with EIC claims but did not attach a Form 8862. IRS, using its math error authority, denied the 63,372 claims that were not accompanied by a Form 8862. Of the taxpayers whose claims were denied, 6,992 subsequently submitted a Form 8862 after receiving IRS’ math error notice.
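The screening rules just described can be summarized in a short routing sketch. This is an editorial illustration of the guidelines as paraphrased above, not IRS's processing code; the field names are assumptions.

    def route_recertification_return(tp: dict) -> str:
        # Route a return from a taxpayer whose master-file recertification indicator is set.
        if not tp["claims_eic"]:
            return "process normally (nothing to recertify this year)"
        if not tp["form_8862_attached"]:
            return "disallow the EIC claim as a math error"
        if tp["claims_previously_disallowed_child"] or tp["claims_new_qualifying_child"]:
            return "hold the entire refund and examine the claim"
        return "release the refund; no audit required"

    print(route_recertification_return({"claims_eic": True,
                                        "form_8862_attached": True,
                                        "claims_previously_disallowed_child": True,
                                        "claims_new_qualifying_child": False}))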
IRS officials believe that the low number of taxpayers trying to get recertified in 1999 may indicate that many of the taxpayers who were disallowed the EIC in 1998 were not eligible for the credit. Although it is too early to assess the effectiveness of IRS’ recertification process, we did identify opportunities for streamlining the process. We discussed some of these opportunities in a July 1999 letter to IRS’ Chief Operations Officer. In that letter, we discussed the following concerns we had with correspondence that IRS used to communicate with taxpayers who were involved in the recertification process: The form letter that IRS used to tell taxpayers that their EIC claims were disallowed contained irrelevant information pertaining to fraud and negligence. We expressed the belief that the language used could cause some taxpayers to not file a claim to which they might be entitled. IRS agreed to modify the letter. Taxpayers could be confused because two form letters used by IRS cited different time frames as to when taxpayers may expect their refunds. One letter said 30 days while the other said 8 weeks. IRS agreed to make the time frames consistent. A letter and form that IRS used to tell taxpayers that IRS needed additional information to verify their EIC eligibility could burden taxpayers by causing them to send much more documentation than called for by IRS’ operating procedures. IRS said that it would revise the letter and form. In addition to our concerns with IRS’ correspondence, we identified two problems with the recertification process. The first problem involves apparently unnecessary steps in the process that create additional burden for IRS and taxpayers; the second problem involves inconsistent procedures that could result in disparate treatment of taxpayers. “…To demonstrate current eligibility, the regulations require the taxpayer to complete Form 8862… The Treasury Department and the IRS anticipate that the Commissioner of the IRS may require taxpayers to provide documentary evidence in addition to Form 8862. Whether or not the Commissioner requires taxpayers to provide documentary evidence in addition to Form 8862, the Commissioner may choose to examine any return claiming the EIC for which Form 8862 is required…The Form 8862 is designed to lead the taxpayer through the EIC tests of entitlement. Because Service Center Examination will look at all 1998 returns of “required to recertify” taxpayers who file Form 8862, the actual taxpayer entries on the Forms 8862 will not determine entitlement in Processing Year 1999. However, it is anticipated that in future years, the Form 8862 will be used to determine entitlement in pipeline processing.” Under current procedures, taxpayers may have to wait from 30 to 60 days after filing their returns with the required Form 8862 before receiving IRS’ request for supporting documentation. Taxpayers would then have up to 30 days to gather the documents and submit them to IRS. According to some service center officials, this exchange of correspondence creates additional burden on both taxpayers and IRS and delays return processing. The second problem involves an inconsistency in the procedures followed by the three service centers we visited. One service center, after reviewing the Forms 8862, sent Letter 525 and Form 886-H to those taxpayers who claimed the same EIC-qualifying child who had been disallowed or claimed a new EIC-qualifying child. 
Letter 525 tells taxpayers that IRS denied their EIC claim but will reconsider that denial if the taxpayers provide supporting documents within 30 days. The second service center did not summarily deny taxpayers’ EIC claims after reviewing the Forms 8862. Instead, that center sent Letter 566A and Form 886-H to those taxpayers. Letter 566A tells taxpayers that their returns are being examined and that they must provide, within 30 days, the documents listed on Form 886-H before they can be recertified. If taxpayers did not respond within 30 days or if their responses were insufficient, the service center sent Letter 525 to inform the taxpayers that their EIC claims were denied.

The third service center used varying recertification procedures, depending on its workload. When its workload was not too heavy, the service center sent taxpayers Letter 525, denying the EIC. However, when its workload became too heavy, the service center changed procedures and sent taxpayers Letter 566A. Officials at this service center said that it is time-consuming to prepare a Letter 525, which requires an explanation to the taxpayer as to why he or she did not qualify for the EIC. As a result, when workload became heavy, the service center relied on Letter 566A to allow the center some additional time before “auditing” the return.

According to IRS’ recertification guidelines, the first service center was following prescribed procedures and the second and the third centers, although seemingly more customer friendly, were not. In explaining the rationale for sending a Letter 525 so quickly, an IRS official at the first center said that the process would be unnecessarily delayed for at least 30 days, as in the case of the second and third service centers, waiting for a response to Letter 566A before denying the EIC. The official pointed out that taxpayers who immediately receive a Letter 525 still have the opportunity to submit documentation and prove their entitlement to the EIC.

As of October 29, 1999, IRS had received about 126 million individual income tax returns, which is an increase of about 2 percent compared to the same time last year. The use of electronic filing increased at a much more robust pace (about 19 percent). This increase continued the steady growth in electronic filing seen since 1996. Even with the increase in electronic filing, about 96 million tax returns (77 percent) were filed on paper in 1999. IRS, in 1999, tested several initiatives to increase the use of electronic filing. As of November 2, 1999, IRS had not compiled the necessary data to assess the impact of those initiatives on electronic filing.

There are three types of electronic filing: (1) traditional, whereby returns are transmitted through a third party (such as a tax return preparer or electronic return transmitter) known as an electronic return originator; (2) on-line, whereby returns are transmitted by the taxpayer through an on-line intermediary using a personal computer and commercial software; and (3) TeleFile, whereby returns are transmitted by the taxpayer over the telephone lines using a Touch-Tone telephone. Although IRS has not done the kind of comprehensive analysis needed to fully assess the costs and benefits associated with these alternative filing methods, it assumes that the methods save IRS money by, among other things, significantly reducing the number of errors that IRS has to correct.
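The assumed savings rest on the built-in checks described in the next paragraph, which catch simple mistakes before a return is transmitted. The following sketch is only an illustration of that idea under assumed field names and tolerances; it is not IRS's or any vendor's validation logic.

    def check_return_arithmetic(r: dict, tolerance: float = 1.0) -> list:
        # Recompute simple line arithmetic and report discrepancies so the filer
        # can correct them before the return is transmitted.
        problems = []
        total_income = r["wages"] + r["interest"] + r["other_income"]
        if abs(total_income - r["total_income"]) > tolerance:
            problems.append(f"total income should be {total_income:.2f}")
        agi = r["total_income"] - r["adjustments"]
        if abs(agi - r["agi"]) > tolerance:
            problems.append(f"adjusted gross income should be {agi:.2f}")
        taxable = max(r["agi"] - r["deduction"] - r["exemptions"], 0.0)
        if abs(taxable - r["taxable_income"]) > tolerance:
            problems.append(f"taxable income should be {taxable:.2f}")
        return problems  # an empty list means the arithmetic is internally consistent

    sample = {"wages": 30000.0, "interest": 200.0, "other_income": 0.0,
              "total_income": 30200.0, "adjustments": 0.0, "agi": 30200.0,
              "deduction": 7100.0, "exemptions": 5400.0, "taxable_income": 17700.0}
    print(check_return_arithmetic(sample))  # prints: []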
Electronic filing options include built-in checks that are designed to catch certain taxpayer errors, such as computational mistakes, in advance so that they can be corrected by the taxpayer before IRS takes possession of the return. Also, returns filed electronically bypass the error-prone manual procedures that IRS uses to process paper returns. Table 7.1 shows that the use of traditional electronic filing and on-line filing increased in 1999, while the use of TeleFile decreased. IRS officials cited several factors, as follows, that may have contributed to the increase in electronic filing in 1999: IRS entered into new partnerships with private sector companies to broaden the electronic services accessible through IRS’ Web site. As part of these arrangements, IRS placed hyper-links from its Web site to the partners’ Web sites, and the partners developed initiatives, such as free electronic filing and free tax preparation software, to increase electronic filing. The number of electronic return originators (ERO) increased from about 82,000 in 1998 to about 90,000 in 1999. Also, some EROs charged taxpayers less to file electronic returns in 1999, and some offered free electronic filing to taxpayers who met certain criteria. Due, in part, to IRS’ marketing and education efforts, taxpayers are becoming more familiar and comfortable with electronic filing and therefore are more likely to file electronically. Another step that IRS took in 1999, which could have a significant positive effect on future growth in electronic filing, was to test several initiatives directed at making electronic filing paperless. One aspect of electronic filing that has been cited consistently as a barrier to greater use is the requirement that electronic filers continue to send IRS certain paper documents. For example, except for taxpayers who use TeleFile, taxpayers who file electronically have had to submit a paper signature document (Form 8453) along with copies of their Wage and Tax Statements (Form W-2). Also, taxpayers who file electronically and have a balance due have had to mail a check and payment voucher to IRS. To make electronic filing paperless and thus more attractive to potential users, IRS tested four initiatives in 1999. Two initiatives enabled certain taxpayers to use electronic signatures; the two other initiatives provided electronic payment alternatives to taxpayers who owed money. IRS, during the 1999 filing season, tested two signature alternatives that waived the need for taxpayers who participated in these tests to submit Forms 8453 and W-2s. In one test, taxpayers filing electronic returns prepared by about 2,500 participating EROs used a self-selected personal identification number (PIN) instead of completing a Form 8453. An IRS official told us that as of October 10, 1999, about 497,000 taxpayers had used the PIN option to sign their tax returns. According to a cognizant official, IRS did not encounter any problems during this test. IRS surveyed practitioners to ascertain, among other things, if they believed that the PIN option increased their electronic filing business. A July 1999 report on the results of IRS’ survey noted that “most practitioners did not believe that PINs increased their number of returns, but it did make those returns that they otherwise would less burdensome.” An official of the largest national tax return preparation company told us that IRS’ test was a good start. 
However, he mentioned the following two features of the process that he would like to see changed:

Although participating taxpayers no longer have to send IRS a paper signature form, the process is not paperless. Taxpayers still have to sign an authentication worksheet that the preparer is to keep on file in case there is any dispute about the return’s authenticity. The official felt that the PIN was sufficient to authenticate the return and would like to see the authentication worksheet eliminated.

On a joint return, both taxpayers must be present to enter their own PINs. While understanding the intent of the requirement, the official thought that it was unrealistic to expect both taxpayers to be present when the return is being prepared. The need for taxpayers to be present in the practitioner’s office to enter their PINs was mentioned in IRS’ July 1999 report as the “greatest difficulty” with the use of PINs. The report recommended further exploration of that issue.

To increase the use of on-line filing, IRS, in its second alternative signature test, identified 12 million taxpayers who had prepared their own tax returns in 1998 using tax preparation computer software but who had filed on paper. IRS mailed those taxpayers a postcard instead of a paper tax package. The postcards provided each taxpayer with a unique E-file Customer Number (ECN) and informed them that, by using that number, they could file a totally paperless tax return on-line. As of October 10, 1999, according to IRS, about 660,000 taxpayers had used ECNs as their signatures. IRS encountered one problem during the early phase of this test. The contractor who had transmitted the on-line returns mistakenly blocked out the taxpayers’ ECNs; therefore, no legal signature existed on some returns. (IRS officials told us that they did not know how many taxpayers were affected by this problem.) The contractor sent a letter to affected taxpayers telling them that it had made a mistake and that the taxpayers would have to submit a paper signature document.

IRS surveyed taxpayers who had used ECNs to determine if this signature alternative encouraged them to file on-line. As of November 2, 1999, IRS had not finished analyzing the results of that survey. However, according to IRS, about 54 percent of the survey respondents said that the ECN made them more likely to file electronically. Officials responsible for the electronic filing effort told us that, for the 2000 filing season, IRS (1) hopes to double the number of EROs participating in the PIN project and (2) intends to increase the number of taxpayers who are allowed to use ECNs.

In 1999, for the first time, many taxpayers who electronically filed balance due returns could electronically pay their balance due in one of two ways—either by credit card or by direct debit from a checking or savings account. On-line filers who used Intuit software packages after February 26, 1999, when testing of the system was completed, were able to indicate on-line when filing their tax returns that they wanted to pay their balance due by credit card. Taxpayers who used traditional electronic filing or TeleFile, as well as taxpayers who filed on paper, could charge their balance due by credit card with a toll-free telephone call to a private company that processed the credit card payment. IRS’ contractor for the pay-by-phone credit card program encountered a problem that resulted in about 13,700 payments being processed improperly.
Even though the taxpayers’ credit card accounts were charged on April 15, 1999—the filing and payment deadline—these payments were treated as advance payments of the taxpayers’ tax year 1999 tax liability and resulted in a balance due for the 1998 tax year. According to a cognizant IRS official, about 1,000 of these taxpayers received a notice from IRS telling them that their 1998 tax year payment was delinquent. IRS was able to stop notices from going to the other taxpayers. Therefore, these taxpayers did not know that their credit card payment was originally processed incorrectly. According to IRS, the contractor contacted all of the affected taxpayers and informed them that IRS had corrected its records and that the contractor, not IRS, had made the mistake.

Taxpayers filing electronic balance due returns could also pay their balance due by a direct debit to their checking or savings account through an automated clearing house. Taxpayers using the direct debit option were able to file early and postpone their payment until April 15. However, the direct debit option was only paperless for on-line filers who participated in the ECN test. Other on-line filers and traditional electronic filers who chose the direct debit option had to submit a Form 8453, which contains a disclosure statement that requires the taxpayer’s signature authorizing the direct debit. On-line filers who participated in the ECN test were using the ECN as their signature and had to indicate, via an on-line prompt, that they wanted to use the direct debit option. IRS informed us that there were virtually no problems encountered in processing the debit payments.

Both the credit card and direct debit options eliminated the need for taxpayers to send checks and payment vouchers to IRS. As of October 8, 1999, about 53,000 and 75,000 taxpayers had paid their tax liabilities by credit cards and direct debits, respectively. An IRS spokesman said that IRS was unable to determine the impact of these payment options on persons’ decisions to file electronically. Officials responsible for the electronic filing effort told us that IRS intends to expand the electronic payment options. For the 2000 filing season, IRS intends to expand the credit card payment option to taxpayers who file two other forms—Form 1040-ES (Estimated Tax) and Form 4868 (Request for Extension). IRS also plans to expand the use of the direct debit option to TeleFile users in 2000.

IRS, in an effort to increase the use of TeleFile, initiated a pilot program in Indiana and Kentucky to study the possibility of allowing taxpayers to file both their federal and state tax returns with one telephone call. As of October 29, 1999, about 107,000 taxpayers in these two states had filed their federal and state tax returns via this pilot program. Despite the joint federal/state TeleFile pilot and the credit card payment option previously discussed, the number of TeleFile returns filed in total, as well as by taxpayers in Indiana and Kentucky, decreased slightly from the numbers filed in 1998. Officials in IRS’ Office of Electronic Tax Administration suggested the following two possible reasons for the decrease in TeleFile use:

Taxpayers who would have been eligible to use TeleFile in prior years were ineligible in 1999 because they claimed the new student loan interest deduction or new education credits, which cannot be claimed by someone filing via TeleFile.

Taxpayers who could have used TeleFile might have switched to on-line filing.
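Returning to the contractor's pay-by-phone problem described at the start of this discussion, the failure mode is essentially a payment keyed to the wrong tax period. The sketch below is an editorial illustration of that mechanism under assumed data structures; it does not depict the contractor's or IRS's actual systems.

    def post_payment(accounts: dict, tin: str, tax_year: int, amount: float) -> None:
        # Credit a payment to the (taxpayer, tax year) module it is keyed to;
        # a negative balance represents an amount still owed.
        accounts[(tin, tax_year)] = accounts.get((tin, tax_year), 0.0) + amount

    # The taxpayer owes $500 for tax year 1998 and pays by credit card on April 15, 1999,
    # but the payment is keyed to tax year 1999 instead of 1998:
    accounts = {("123-00-6789", 1998): -500.0}
    post_payment(accounts, "123-00-6789", 1999, 500.0)
    print(accounts)
    # {('123-00-6789', 1998): -500.0, ('123-00-6789', 1999): 500.0}
    # Tax year 1998 still shows a balance due (hence the delinquency notices), while
    # tax year 1999 shows what looks like an advance payment.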
Some tax law changes mandated by TRA97 took effect during the 1999 filing season. Those changes included (1) a new basic child tax credit, (2) an additional child tax credit for taxpayers with three or more eligible dependents, and (3) various new education-related deductions and credits. The added complexity associated with the child tax credit provisions led to numerous taxpayer errors and increased IRS’ processing workload.

A tax law change affecting certain taxpayers with dependents provided a maximum nonrefundable child tax credit of $400 for each qualifying child. Taxpayers with children must first determine if their children qualify for the credit using criteria that differ from the criteria for determining if a child qualifies for a dependent exemption or for the EIC. After taxpayers determine that they have qualifying dependents, they are to fill out an 11-line worksheet to determine the amount, if any, of their credit. The credit is phased out over various income levels on the basis of filing status and cannot exceed the taxpayer’s tax liability. In addition, the credit is reduced by other credits, including the child care credit, the credit for the elderly and disabled, and education credits.

About 28 percent of the individual income tax returns filed as of August 27, 1999, listed one or more dependents who the taxpayers believed qualified for the child tax credit. But, according to IRS data, only about 20 percent of the filed returns actually included a child tax credit claim. As noted in the preceding paragraph, additional criteria beyond the existence of a qualifying child would have made some of these taxpayers ineligible. However, as discussed in the next paragraph, some taxpayers who were eligible for the credit did not claim it.

According to data provided by IRS, the child tax credit was the fourth most common source of errors made on individual income tax returns filed in 1999. As of July 16, 1999, IRS had mailed about 571,000 notices to taxpayers whose returns contained errors relating to that credit. About 88 percent of these errors were made by taxpayers who prepared their own returns. Service center processing officials estimated that about one-half of these taxpayers failed to take the credit, even though they checked the box indicating that they had an eligible dependent and other information on the return (e.g., amount of income) indicated that they were eligible. The other one-half erred in calculating the credit amount. The need to correct these mistakes added to IRS’ processing workload and may have led to a refund delay for some taxpayers.

During the filing season, IRS revised its procedures for dealing with returns in which taxpayers failed to take the child tax credit even though information on the return indicated that they were eligible. Initially, IRS adjusted such returns to include the credit, without verifying the dependent’s age. To qualify, a dependent must be under 17. After March 2, 1999, IRS began verifying the dependent’s age using information from the Social Security Administration. Because this verification was done manually, service center officials told us that it significantly increased their workload, although they were unable to quantify the workload increase. In response to the numerous errors, IRS issued a press release in early March 1999 cautioning taxpayers to carefully check the child tax credit instructions before filing their returns.
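To make the worksheet logic described above concrete, the following rough sketch combines the elements the text identifies: $400 per qualifying child, a phase-out that depends on income and filing status, a reduction for certain other credits, and a cap at the remaining tax liability. The specific phase-out thresholds and the $50-per-$1,000 reduction rate are stated here as illustrative assumptions rather than as the worksheet's exact figures.

    import math

    def child_tax_credit(qualifying_children: int, magi: float, filing_status: str,
                         tax_before_credit: float, other_credits: float) -> float:
        # $400 per qualifying child, phased out at higher incomes, reduced by certain
        # other credits, and limited to the remaining tax liability (nonrefundable).
        tentative = 400.0 * qualifying_children
        # Assumed 1998 phase-out thresholds and rate ($50 per $1,000, or fraction
        # thereof, of modified AGI above the threshold); treat these as illustrative.
        thresholds = {"married filing jointly": 110000.0,
                      "single": 75000.0,
                      "head of household": 75000.0,
                      "married filing separately": 55000.0}
        excess = max(magi - thresholds[filing_status], 0.0)
        tentative = max(tentative - 50.0 * math.ceil(excess / 1000.0), 0.0)
        return max(min(tentative, tax_before_credit - other_credits), 0.0)

    # Two qualifying children, modest income, $2,400 of tax before credits, and a
    # $480 child care credit leave the full $800 available.
    print(child_tax_credit(2, magi=32000.0, filing_status="married filing jointly",
                           tax_before_credit=2400.0, other_credits=480.0))  # prints: 800.0

Even in this simplified form, the computation involves qualification tests, a phase-out, interactions with other credits, and a liability cap, which helps explain why roughly half of the errors involved miscalculated amounts and half involved eligible taxpayers who skipped the credit entirely.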
In addition, IRS is developing revised instructions for next year’s tax packages in an effort to reduce taxpayer errors during the 2000 filing season. The additional child tax credit was designed to benefit taxpayers with three or more dependents and could be a refundable credit. To benefit from this credit, taxpayers had to meet conditions beyond those for the basic child tax credit and were required to fill out an additional form to determine how much, if any, additional credit they were due. Because of the numerous limitations placed on the additional child tax credit, very few taxpayers benefited from this credit. Only about 4 percent of filed returns listed three or more eligible dependents. For taxpayers with three or more eligible dependents, the credit was limited by the amount of Social Security and Medicare tax withheld, if any; one-half of any reported self-employment tax; and any Social Security and Medicare tax on tip income not reported to the taxpayers’ employers. The credit was then reduced by amounts claimed for the EIC, the credit for the elderly and disabled, education credits, and any excess Social Security tax withheld. As a result, about 86 percent of the returns with three or more eligible dependents that were filed as of August 27, 1999, did not claim any additional child tax credit because the taxpayers either were not eligible for the credit or overlooked taking the credit even though they were eligible. Overall, only about 1/2 of 1 percent of all tax returns showed some amount for the additional child tax credit. As of July 16, 1999, IRS had sent out about 40,000 notices to taxpayers who had made a mistake in claiming the additional child tax credit. During the 1999 filing season, several education-related deductions and credits took effect, including student loan interest deductions and the Hope and lifetime learning credits. As of August 27, 1999, about 3 percent of all returns claimed a student loan interest deduction, and about 4 percent of the returns claimed education credits. Various witnesses at a May 25, 1999, Oversight Subcommittee hearing on tax law complexity expressed concern about the overall complexity associated with the array of education assistance programs now available in the tax code. However, unlike the child tax credit, the impact of such complexity in terms of the number of taxpayers who inappropriately claimed the credit or deductions or who failed to claim a credit or deduction to which they were entitled generally cannot be determined from the face of the return. An audit of the return and supporting documents would be needed to determine if taxpayers claimed the proper amount or were entitled to amounts that they did not claim. Even with these limitations, IRS, as of July 16, 1999, had sent about 53,000 notices to taxpayers who had erred in claiming education-related benefits. Nearly all of these errors were by taxpayers who claimed more student loan interest than the $1,000 maximum allowed by the law. For the 1999 filing season, IRS made significant changes to the computer hardware and software that it uses to process returns and remittances. IRS accomplished these changes without any discernible disruption to the processing of returns, refunds, and remittances. One major change involved replacement of the returns processing system at all 10 service centers and replacement of the remittance processing system at 6 centers (the other 4 centers were to have their remittance processing systems replaced in time for the 2000 filing season). 
According to an IRS official responsible for this replacement project and processing officials at two service centers, the transition to the new systems went well, and workloads were processed as expected. Also, our analysis of various filing season data and comparisons of those data to similar data for the 1998 filing season disclosed nothing to indicate that this replacement project caused any significant processing delays in 1999. IRS continually worked throughout the filing season to resolve various system problems, most of which did not affect taxpayers. For example, there were problems with the transport system on the remittance processing system, which required additional personnel to perform maintenance. IRS reported one problem with the remittance processing system that failed to record some taxpayers’ payments, which led to the issuance of about 2,400 erroneous balance due notices. According to IRS, it quickly contacted the affected taxpayers to provide the correct information. A second major change involved consolidating service centers’ mainframe computer equipment at IRS’ two computer centers in Martinsburg, WV, and Memphis, TN. At the beginning of the 1999 filing season, computer operations for three service centers had been consolidated. IRS projects that the other seven centers will be consolidated by January 2001. Because this project is ongoing, IRS is continuing to resolve problems, such as isolated printer problems affecting the printing of address labels for Examination and Collection cases. An official at one of the consolidated service centers said that this problem delayed the issuance of some notices and that, in some cases, IRS personnel resorted to handwriting labels. However, the IRS official stated that these problems had no effect on filing season-related taxpayer notices. A successful filing season requires that IRS effectively manage a wide range of programs, through which it assists taxpayers in meeting their filing requirements; processes filed returns and related tax payments; and takes certain steps to help ensure that taxpayers’ refund claims, especially those involving the EIC, are valid. There were many positive accomplishments during the 1999 filing season. IRS expanded the availability of walk-in services, stopped hundreds of millions of dollars in erroneous EIC payments, saw a sizable increase in electronic filing, and implemented a major new processing system without significant disruption. However, there were also some problems—the most important being a significant decline in telephone service despite IRS’ efforts to improve that service. IRS officials cited several factors that contributed to the decline in telephone service, some of which might be due to IRS’ inexperience with its new way of managing telephone operations and its new call routing technology. However, other factors, such as inadequate planning and decisionmaking that appeared to be based on inadequate data or that seemed to ignore existing data, may be symptomatic of basic management weaknesses or challenges. To better understand the challenges facing IRS and better position ourselves to propose constructive solutions, we are reviewing, in more detail, IRS’ management of its telephone operations. Thus, we are making no telephone-related recommendations in this report. In addition to the decline in access to the telephone system, the quality of answers that taxpayers received when they reached an IRS assistor also dropped. 
IRS recognizes the need to provide further training to its assistors, and officials said there are plans to do so before the start of the 2000 filing season. That training should help improve quality. We also identified some features of IRS’ methodology for measuring quality during the 1999 filing season that warranted IRS’ attention. For example, IRS monitored about 31 percent fewer telephone calls than provided for in its sampling plan, which could affect the precision of IRS’ estimates of quality. With respect to our other concerns about IRS’ methodology, IRS has agreed to examine the effect of cluster sampling on the precision of its estimates and has extended the hours during which it is monitoring calls. Even with the increase in monitoring hours, IRS’ measure of quality might still provide different results than it would if all hours were monitored. Besides expanding the availability of walk-in services in 1999, IRS did a better job of measuring customer satisfaction with those services. However, it made little progress in measuring service quality and timeliness. Without meaningful nationwide performance data, IRS cannot determine if the walk-in program is meeting its objectives and goals, and thus whether it is an effective method of providing service. IRS also increased the information and services it provides through various other methods, such as its Web site. However, the substantial increase in use of the Web site’s E-mail service strained resources and apparently contributed to inaccurate responses and slow response times. Due to the way IRS tracked response times (calendar days v. business days), it could not actually determine how close it came to meeting its timeliness goal. We are not making any recommendations in this area because IRS told us that it plans to (1) change the response-time tracking system to align it with the way the goal is stated and (2) provide assistors with additional training on the E-mail topics with the highest error rates. IRS data strongly suggest that the continuing emphasis on EIC noncompliance has produced significant results. IRS identified many erroneous claims by validating SSNs and scrutinizing certain EIC claims. In addition, the many taxpayers who had their EIC denied for tax year 1997 and did not claim the EIC for tax year 1998 would seem to indicate that the new recertification procedures had a positive effect. However, it is possible that some of those taxpayers who did not claim the EIC for tax year 1998 may have been entitled to the credit but did not understand the recertification process or found it too burdensome. IRS has agreed to make some changes in that regard in response to our July 1999 letter. In addition to those changes, we believe that the form taxpayers are required to submit to be “recertified” (Form 8862) may mislead them to believe that the information they provide in response to the questions on the form will be sufficient for recertification. Taxpayers may become discouraged and confused when they realize that the information is not sufficient and, instead, that submission of Form 8862 leads to still another IRS request for documents. Taxpayers might rightfully wonder why, if the documents required by later correspondence are essential for recertification, IRS did not tell them that those documents were required when it first notified them about the need to recertify. 
In addition, even though there was national guidance on the recertification process that service centers were to follow, the guidance was not being followed consistently, which could result in disparate treatment of taxpayers.

Tax law changes dealing with the new child tax credit seem to have added complexity to filing a tax return, as evidenced by the numerous errors and increased processing workload for IRS. IRS is planning to revise the tax package instructions for tax year 1999 (filing year 2000) in an attempt to reduce taxpayer confusion.

Our analysis of various performance data indicated that IRS successfully implemented a new processing system. One piece of evidence that we initially thought might indicate some problem with the new system was that IRS took longer than 40 days to issue about 15 percent of the refunds on paper returns. Further inquiry indicated, however, that IRS’ performance in 1999 was close to its performance in 1998. However, we are still concerned that IRS took more than 40 days to issue so many refunds. There may be valid reasons, but IRS was unable to provide us with the kind of data needed to make that determination. Such data are important if IRS wants to identify ways that it might improve its performance.

We recommend that the Commissioner of Internal Revenue direct the appropriate officials to take the following steps:

Analyze the effect of not achieving the planned sample size for monitoring the accuracy of responses to tax law calls and use the results of that analysis to design the sample used in future monitoring.

Implement a program for assessing the performance of IRS’ walk-in sites. As part of that program, require that quality reviews be done, provide sufficient guidance to ensure that the reviews are done consistently and address appropriate issues, and require that data on the results of quality reviews and wait-time monitoring (whether done automatically or manually) be reported to a central location for analysis.

If IRS does not rely on Form 8862 for recertification purposes, discontinue its use.

If IRS continues using Form 8862 for recertification purposes, redesign the form to include reference to the documentation listed on Form 886-H and any other documentation that IRS thinks is necessary for recertification so that taxpayers who are required to recertify know as early as possible what documentation is required for recertification.

Ensure that all service centers implement the recertification procedures according to national guidelines to avoid possible disparate treatment of taxpayers.

Analyze the results of the refund timeliness tests to determine, among other things, why about 15 percent of the refunds took longer than 40 days to issue and what the test results showed for returns that were filed error-free.

We requested comments on a draft of this report from IRS. We obtained IRS’ written comments in a December 3, 1999, letter from the Commissioner of Internal Revenue (see app. II). On December 3, 1999, we also met with various representatives from the office of IRS’ Chief Operations Officer, which is responsible for the various programs we reviewed, to discuss IRS’ comments. In his letter, the Commissioner said that (1) our draft report provided a fair and balanced assessment of IRS’ efforts to improve processing while providing taxpayers with top quality service and (2) IRS would make every effort to resolve the issues noted in the draft report.
Regarding our recommendation to analyze the effect of not achieving the planned sample size for monitoring the accuracy of responses to tax law calls, the Commissioner said that IRS has completed such an analysis and is in the process of filling 20 additional monitoring positions. He said that with the additional staff, IRS will be able to meet the desired sampling plan for tax law and other telephone calls. IRS’ actions appear responsive to our recommendation.

IRS agreed with our recommendation regarding the implementation of a program for assessing the performance of IRS’ walk-in sites. The Commissioner said that, in fiscal year 2000, IRS will implement a quality review program to measure the quality and timeliness of services at walk-in sites. According to IRS, training was conducted in October 1999 to ensure consistency among the quality reviewers, and quality review results and wait-time monitoring results will be reported to the National Office for analysis. These actions, if effectively implemented, will meet the intent of our recommendation.

In responding to our two recommendations dealing with Form 8862, the Commissioner said that (1) IRS relies on Form 8862 to “identify the type of action to be taken for taxpayers required to recertify” and (2) any modifications to Form 8862 will be made after assessing the results of the recertification process in 1999 and after completion of an ongoing IRS research project on recertification. At the December 3, 1999, meeting, IRS officials confirmed that IRS’ intent is to defer any decision on either discontinuing or modifying Form 8862 until after the assessment and research project referred to by the Commissioner are completed. We believe that it would be useful to await the results of IRS’ assessment and research project before deciding on changes to the recertification process in general and the use of Form 8862 in particular. We encourage the Commissioner to ensure timely completion of those efforts so that any changes can be implemented in time for the 2001 filing season. We will be checking on the results of IRS’ assessment and research project as part of our review of the 2000 tax filing season.

The Commissioner also noted that redesigning Form 8862 to include references to documentation that might be needed for recertification may be counterproductive to IRS’ efforts to reduce taxpayer burden. He explained that such a change, for example, could cause some taxpayers to submit unnecessary documentation with their returns. We agree with the Commissioner about the need to reduce taxpayer burden, and that is the intent of our recommendation. We believe that the current process could mislead taxpayers into believing that the information they provide on Form 8862 will be sufficient for recertification and that their refund is being processed. When taxpayers subsequently receive the notice that their refund is being delayed and that additional documents are necessary for recertification, they may feel burdened by the delayed refund and by the fact that IRS waited until after they filed to tell them what information they had to provide to prove their eligibility for the EIC. We also believe that IRS can mitigate any risk that adoption of our recommendation will cause some taxpayers to provide unnecessary documentation by making it clear, in the notice sent to taxpayers, when submission of the documentation is required.
IRS agreed with our recommendation that all service centers implement the recertification procedures according to the national guidelines. According to the Commissioner, (1) the guidelines have been incorporated into the Internal Revenue Manual, (2) adherence to procedures in the manual is mandatory, and (3) special reviews will be done during fiscal year 2000 to assess conformance to the procedures. IRS also agreed with our recommendation that it analyze the results of the refund timeliness tests to determine why some refunds took longer than 40 days to issue and what the test results showed for returns that were filed error-free. The Commissioner said that IRS will be doing an initial analysis that will provide some of the information called for in our recommendation. Depending on the results of that analysis, which is to be completed by February 1, 2000, IRS said that it might conduct a more extensive analysis. We will be following up on the results of IRS’ analysis as part of our assessment of the 2000 tax filing season. IRS also provided various technical comments, which we incorporated in the body of this report where appropriate.
Pursuant to a congressional request, the GAO discussed the Internal Revenue Service's (IRS) performance during the 1999 tax filing season, focusing on: (1) telephone service; (2) availability of walk-in services; (3) other taxpayer service efforts; (4) Earned Income Credit (EIC) noncompliance; (5) electronic filing; (6) implementation of recent tax law changes; and (7) implementation of a new return and remittance processing system. GAO noted that: (1) IRS met or exceeded its 1999 goals for several performance measures, but fell short of its goals in two key areas - taxpayers' ability to access IRS' toll-free telephone service and the quality of IRS' responses to taxpayers' tax law questions; (2) certain features of IRS' methodology for measuring the quality of responses to tax law questions also warranted IRS' attention; (3) the timeliness of refunds for paper returns raised some questions about IRS' timeliness that IRS could not answer; (4) IRS attempted to improve telephone service in 1999, however, service did not improve but deteriorated; (5) GAO's work indicated this deterioration resulted from (a) unrealistic assumptions about the implementation and impact of IRS' changes, and (b) other problems managing staff training and scheduling and implementing new technology; (6) IRS enhanced the availability of its walk-in services by increasing Saturday hours and making services more accessible to taxpayers who did not have convenient access to a walk-in office; (7) IRS did a better job of measuring walk-in customer satisfaction in 1999 than 1998; (8) however, IRS made little progress in measuring the quality and timeliness of its walk-in services; (9) use of IRS' World Wide Web site on the Internet increased significantly during the 1999 filing season, but IRS data pointed to problems with taxpayers getting answers to tax law questions via electronic mail; (10) IRS stopped millions of dollars in erroneous EIC claims in 1999 by validating social security numbers and scrutinizing certain claims; (11) IRS implemented new procedures (called recertification) in 1999 that require certain taxpayers to document their eligibility for the EIC before IRS approves their claim; (12) GAO identified certain opportunities to streamline the recertification process and thus make it less burdensome to taxpayers and IRS; (13) IRS service centers were not consistently following the national guidelines for recertification, which could result in disparate treatment of taxpayers; (14) IRS implemented several initiatives in 1999 directed at making electronic filing paperless and thus more appealing to potential users; (15) 20 percent of the returns filed in 1999 included the new child tax credit; (16) many of those taxpayers erred in calculating the credit amount, while others who were eligible for the credit failed to claim it; (17) correction of these errors increased IRS' processing workload; (18) IRS made significant changes to the computer systems it uses to process returns and remittances; and (19) IRS accomplished those changes without any discernible processing disruptions.
The current U.S. bank risk-based capital regulations implement the 1988 Basel Accord on risk-based capital. The Basel Accord established the widespread use of capital ratios that bank and thrift regulators could use as a starting point for assessing the financial condition—that is, safety and soundness—of internationally active banks and thrifts. In the United States, U.S. bank regulators applied the Basel Accord to all banks, rather than just internationally active ones. In 1991, GAO recommended a tripwire approach—incorporating capital and safety and soundness standards, or levels at which supervisory actions would be triggered—based on our findings that regulatory discretion and a common philosophy of trying to resolve the problems of troubled institutions informally and cooperatively resulted in enforcement actions that were neither timely nor forceful enough to prevent or minimize losses to the deposit insurance fund. Moreover, acting in response to the large number of bank and thrift failures in the late 1980s and early 1990s, Congress enacted the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA), which included a capital-based regulatory structure known as PCA. Specifically, FDICIA categorizes depository institutions into five classifications on the basis of their capital levels and imposes increasingly more severe restrictions and supervisory actions as an institution’s capital level deteriorates. CUMAA required NCUA to adopt a system of PCA comparable with that of FDICIA for use on federally insured credit unions, which NCUA initially implemented in 2000. CUMAA defined the net worth ratio for PCA purposes as net worth to total assets. Under CUMAA, net worth is defined as the retained earnings balance of the credit union at quarter end, as determined under generally accepted accounting principles (GAAP). NCUA regulations provide four alternative methods that credit unions can use to calculate total assets for use in the net worth ratio: (1) average of quarter-end balances of the current and three preceding calendar quarters, (2) average of month-end balances over the three calendar months of the calendar quarter, (3) average daily balance over the calendar quarter, or (4) quarter-end balance of the calendar quarter as reported on the credit union’s call report. NCUA regulations state that for each quarter, a credit union must elect a measure of total assets from these four alternatives to apply for all PCA purposes, except for the risk-based net worth requirement. CUMAA prescribes three principal components of the PCA system for credit unions: (1) a comprehensive framework of actions, including actions prescribed by statute and discretionary actions to be developed by NCUA, for credit unions that are less than well-capitalized; (2) an alternative system of PCA to be developed for credit unions that NCUA defines as “new”; and (3) a risk-based net worth requirement to apply to credit unions that NCUA defines as “complex.” Table 1 summarizes the PCA capital requirements for regular and complex credit unions. CUMAA imposes up to four mandatory supervisory actions—an earnings transfer, submission of an acceptable net worth restoration plan, a restriction on asset growth, and a restriction on member business lending—depending on a credit union’s capital classification, as determined by net worth ratios. Credit unions that are not well-capitalized are required to take an earnings transfer. 
Credit unions that are undercapitalized, significantly undercapitalized, or critically undercapitalized are subject to all four actions. In addition, CUMAA requires NCUA to appoint a conservator or liquidation agent within 90 days of a credit union becoming critically undercapitalized unless the NCUA Board of Directors determines that other action would better achieve PCA’s purpose. Pursuant to CUMAA, NCUA also developed discretionary supervisory actions, such as the dismissal of officers or directors of an undercapitalized credit union, to complement the prescribed actions under the PCA program. While CUMAA required NCUA to implement a system of capital-based tripwires, capital-based safeguards of insurance funds are inherently limited because capital does not typically show a decline until an institution has experienced substantial deterioration in other components of its operations and finances. Deterioration in an institution’s internal controls, asset quality, and earnings can occur years before capital is adversely affected. Financial regulators recognize that, though essential, a capital requirement is only one of a larger set of prudential tools used to protect customers and ensure the stability of financial markets they regulate. For depository institutions, the key tool that financial regulators use to ensure the adequacy of an institution’s capital levels and its safety and soundness is the on-site examination process. The credit union industry’s recent interest in using alternative forms of capital appears to be associated primarily with three concerns about PCA for credit unions. First, several credit union officials argued that secondary capital or other alternatives were needed, given concerns that credit unions might trigger PCA restrictions because of rapid inflows of deposits due to investors’ “flight to safety”; however, we have not found widespread evidence to support these concerns. To assist credit unions that fall marginally below “adequately capitalized” primarily because asset growth has outstripped income growth, NCUA proposed the use of an abbreviated net worth restoration plan (NWRP). According to an NCUA official, the proposed rule was not pursued further because it was considered too complicated, would only benefit a very small number of credit unions, and did not appear to provide material relief. Second, other credit union officials contended that PCA acts as a restraint on credit union growth. Our analysis of credit union and bank data indicates that credit unions have been growing faster than banks in the 3 years that credit union PCA has been in effect. Finally, several credit union officials are concerned that the PCA tripwires for credit unions are too high, given the conservative risk profile of most credit unions. It should be noted that, according to Treasury, Congress established the capital level 2 percentage points higher because 1 percent of a credit union’s capital is deposited in NCUSIF and another 1 percent of the typical credit union’s capital is invested in a corporate credit union. As investors sought high-quality (that is, safe) investments due to weak performance by the stock and other investment markets in the early 2000s, credit unions experienced significant growth in member share deposits. Several credit union industry officials expressed concern that this inflow of new shares into credit unions might dilute net worth ratios, thus triggering net worth restoration plans and other supervisory actions under PCA.
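The dilution concern described above follows directly from the arithmetic of the net worth ratio. The sketch below is a minimal illustration, not NCUA's actual calculation: it computes total assets under the four alternative measures allowed by NCUA regulations, forms the net worth ratio, and maps the result to a PCA category. The 7 percent, 6 percent, and 2 percent levels reflect figures cited elsewhere in this report; the 4 percent boundary, the function names, and the dollar amounts are assumptions for illustration only.

```python
from statistics import mean

def total_assets(method, quarter_end_balances=None, month_end_balances=None,
                 daily_balances=None, current_quarter_end=None):
    """Illustrative versions of the four alternative total-asset measures
    described in NCUA's regulations (names and signature are assumptions)."""
    if method == "avg_quarter_ends":   # average of current and three preceding quarter-end balances
        return mean(quarter_end_balances)
    if method == "avg_month_ends":     # average of the three month-end balances in the quarter
        return mean(month_end_balances)
    if method == "avg_daily":          # average daily balance over the calendar quarter
        return mean(daily_balances)
    if method == "quarter_end":        # quarter-end balance as reported on the call report
        return current_quarter_end
    raise ValueError("unknown method")

def net_worth_ratio(retained_earnings, assets):
    """Net worth ratio = retained earnings (GAAP) / total assets."""
    return retained_earnings / assets

def pca_category(ratio):
    """Map a net worth ratio to a PCA category. The 7, 6, and 2 percent levels
    are cited in this report; the 4 percent boundary is an assumed illustration."""
    if ratio >= 0.07:
        return "well capitalized"
    if ratio >= 0.06:
        return "adequately capitalized"
    if ratio >= 0.04:
        return "undercapitalized"              # assumed boundary
    if ratio >= 0.02:
        return "significantly undercapitalized"
    return "critically undercapitalized"       # 2 percent hard floor

# Example: a credit union electing the quarter-end measure of total assets.
assets = total_assets("quarter_end", current_quarter_end=100_000_000)
ratio = net_worth_ratio(retained_earnings=6_500_000, assets=assets)
print(f"{ratio:.2%} -> {pca_category(ratio)}")  # 6.50% -> adequately capitalized
```

Because a share inflow raises the denominator immediately while retained earnings accumulate only over time, rapid deposit growth can pull the ratio toward a lower category even when nothing else about the credit union has changed.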
To assist credit unions that fall marginally below “adequately capitalized” primarily because asset growth outstrips income growth, NCUA introduced the concept of an abbreviated NWRP in June 2002. While no specific proposal was introduced, the NCUA board invited public comment on the concept of what was then referred to as “safe harbor” approval of an NWRP—that is, notice of certain criteria established by regulation that, when met, will ensure approval. In November 2002, NCUA put forth a proposed rule and request for public comment on allowing the use of an abbreviated NWRP—which NCUA referred to as a first-tier NWRP—by qualifying federally insured credit unions whose net worth ratio declined marginally below the adequately capitalized threshold (6 percent) because growth in assets outpaced growth in net worth. Under the proposal, a credit union would have been eligible to file an abbreviated NWRP if it satisfied three criteria: historical net worth, performance, and growth. There were three principal differences between the content requirements of a standard NWRP and the abbreviated NWRP proposed by NCUA. First, the proposed abbreviated NWRP would require only 4 quarters of pro forma projections of total assets, shares and deposits, and return on average assets, while the standard NWRP required complete pro forma financial statements covering a minimum of 2 years. Second, the abbreviated NWRP would not require a credit union to specify what steps it would take to meet its schedule of quarterly net worth targets, which is required for a standard NWRP. Finally, a standard NWRP requires those steps to extend beyond the term of the plan to ensure that the credit union remains at least adequately capitalized for 4 consecutive quarters thereafter. In contrast, the proposed abbreviated NWRP did not address the credit union’s net worth after the end of the term of the plan. NCUA’s proposed rule also detailed the criteria for approval of the abbreviated NWRP and the circumstances in which a credit union that would otherwise be eligible to file an abbreviated NWRP would have been required to file a standard NWRP instead. According to an NCUA official, the proposed rule was not pursued further because it was considered too complicated, would benefit only a very small number of credit unions, and did not appear likely to provide material relief since some form of NWRP (albeit somewhat abbreviated) was still required by statute. NCUA officials stated that the credit union industry supported the proposal for an abbreviated NWRP but was advocating a version that would be automatically approved if it met a fixed set of objective criteria. However, NCUA officials explained that CUMAA requires a case-by-case determination by NCUA that a plan “is based on realistic assumptions and is likely to succeed in restoring the net worth of the credit union.” Although NCUA’s proposal to assist certain credit unions that fall marginally below “adequately capitalized” was not pursued further, we found that despite a recent inflow of member share deposits, the credit union industry as a whole has been able to maintain net worth ratios well above the PCA threshold for well-capitalized credit unions. Moreover, current data suggest that the “flight to safety” may be over, as investors appear to be returning to the investment markets.
Figure 1 illustrates that during the period that PCA has been in place for credit unions (2001–2003), the net worth ratios for federally insured credit unions dropped somewhat initially but stabilized at the close of 2003. Groups such as the National Association of State Credit Union Supervisors (NASCUS) and several credit union chief executive officers (CEOs) told us that the combination of PCA requirements and members’ flight to safety from the markets could force both fast-growing credit unions and small to midsize credit unions to choose between (1) refusing deposits, (2) reducing services to members in order to retard the growth of assets, (3) converting to a savings and loan or community bank, or (4) merging with another credit union. While some of the larger credit union CEOs with whom we spoke stated that PCA is not causing capital constraints currently, they told us the potential exists for share growth to outstrip their ability to retain earnings, thus triggering net worth restoration plans and other supervisory actions under PCA. On the other hand, according to some CEOs of small and midsize credit unions, these constraints are affecting them currently. While the constraints noted above may have occurred to some extent in a limited number of credit unions, we did not find evidence of widespread net worth problems for federally insured credit unions during the period PCA has been in place. Moreover, as of December 2003, less than 3 percent of federally insured credit unions had reported a net worth ratio below the well-capitalized threshold. Some credit union industry officials have indicated that the current credit union PCA system acts as a restraint on credit union growth, because any additional new member shares (deposits) would increase their assets and correspondingly reduce their net worth ratios. While most credit unions have been well-capitalized during the period that PCA has been in place, some industry officials have suggested that the capital constraints it imposes will become increasingly difficult to manage, forcing credit unions to turn away deposits so as not to dilute or decrease their net worth ratios. It should be noted that PCA was intended to curb aggressive growth, since uncontrolled growth was one of the common attributes of thrifts and banks that failed during the banking crisis of the late 1980s and early 1990s. Credit union industry officials, as well as NCUA, have stated that some credit unions have had to reduce their services to members in an effort to satisfy PCA requirements. NCUA officials told us credit unions that have decreased services to their members have done so as part of net worth restoration plans. However, NCUA officials told us they would have no way of determining the number of credit unions considering decreasing services in an effort to prevent being subject to regulatory actions by NCUA. We have not found any evidence that federally insured credit unions are limiting their services to accommodate a rapidly growing deposit base. Moreover, active asset management is a major component of the operations of any financial institution. Credit union managers are expected to manage the growth of their institutions so that an influx of member deposits would not cause the credit union to become subject to PCA. Despite the concerns about PCA acting as a constraint against asset growth, credit unions have grown at a higher rate than banks and thrifts during the period that PCA has been in place for credit unions (see fig. 2).
This was particularly the case in 2001, the first full calendar year in which PCA was in place for credit unions. In that year, credit unions achieved an asset growth rate of more than 14 percent, compared with an approximate growth rate of 6 percent for other depository institutions. The disparity in growth rates narrowed in 2002 and 2003. The credit union industry has consistently criticized PCA triggers (that is, capital thresholds) as being too high. Some credit union officials have noted that PCA encourages credit union managers to hold more capital than is necessary, which does not allow them to maximize shareholder value. In addition, they said that PCA tripwires for credit unions are higher than those of banks and thrifts despite the more conservative risk profile of credit unions. Banks and thrifts are required to meet two capital requirements in order to be adequately capitalized: (1) a minimum tier 1 leverage ratio—that is, a minimum ratio of tier 1 capital to total assets, which is generally 4 percent; and (2) a risk-based capital ratio of 8 percent capital to risk-weighted assets. Under CUMAA’s net worth requirements, federally insured credit unions must maintain at least 6 percent net worth to total assets to be considered adequately capitalized. This exceeds the 4 percent tier 1 leverage ratio applicable to banks and thrifts (and is statutory, as opposed to regulatory). In its 2001 report, Treasury stated that Congress determined that a higher ratio was appropriate because credit unions cannot quickly issue capital stock to raise their net worth as soon as a financial need arises. Instead, credit unions must rely on retained earnings to build net worth, which necessarily takes time. Moreover, Treasury stated that Congress established the capital level 2 percentage points higher, a level recommended by Treasury in its 1997 report on credit unions, because 1 percent of a credit union’s capital is deposited in NCUSIF and another 1 percent of the typical credit union’s capital is invested in a corporate credit union. Effective July 3, 2003, a federally insured credit union is allowed to invest up to 2 percent of its assets in any one corporate credit union and, in the aggregate, up to 4 percent of its assets in multiple corporate credit unions. Though some in the credit union industry seek the use of alternative forms of capital, little information exists that would allow us to assess the implications of using these instruments. We found that the credit union industry lacks consensus on the desirability of these instruments, with one of the key issues in the current debate over secondary capital centered on who would purchase these instruments and their resulting impact on the unique nature of credit unions—member-owned, not-for-profit cooperatives. Also, we could not identify a definitive proposal that specifically addressed other critical issues relating to the use of secondary capital instruments, such as pricing and market demand. While low income credit unions and corporate credit unions are allowed to use secondary capital instruments and count them toward their net worth requirements, their experiences are too narrow to offer insight into the value of such an instrument for all federally insured credit unions. However, one industry group has developed a list of principles, or minimum set of criteria, to consider for any proposal.
The credit union industry is divided on the merits and potential effects of using alternative capital. Credit union industry officials have expressed concerns that credit unions may find their rate of share (deposit) growth exceeding their ability to accumulate retained earnings, triggering net worth restoration plans and other supervisory actions under PCA. According to one trade association, the Credit Union National Association (CUNA), building net worth through earnings retention is a time-consuming process, and being able to use alternative capital instruments would allow a credit union to quickly build its capital levels. Additionally, some credit union officials believe that the current credit union capital system encourages managers to overcapitalize their credit unions (that is, hold excessive capital), which is not always the best alternative for financial institutions. Some officials have stated that secondary capital would allow credit union managers the flexibility to be more proactive in managing their capital. One credit union CEO, whose institution is one of the largest federally insured credit unions, stated that three of the five largest federally chartered credit unions were against allowing credit unions to acquire secondary capital. He countered arguments for changing PCA by citing his credit union’s experience with a dramatic influx in shares 2 years ago. He noted the influx did not trigger PCA because his institution’s capital was aggressively managed. The CEO added that the dividends paid to the credit union’s members, along with other services, were not limited or reduced as a result of this aggressive management. He explained that the excess capital (which was built over time through returns on investments at higher interest rates) in concert with diligent capital management kept the credit union from triggering PCA. Debate over secondary capital centers on who should be allowed to purchase these instruments. Some in the credit union industry argue that allowing outsiders to invest in the credit union industry would increase market discipline, but there are concerns that outside investment would be more costly and change the structure of the credit union industry. Opponents of secondary capital suggest that allowing voting, or even nonvoting, secondary capital from investors outside of the credit union industry would dilute the ownership structure of credit unions—not-for-profit, member-owned cooperatives. For example, one credit union CEO asserts that secondary capital would allow outside investors “a place at the table,” whether the subordinated debt instruments carry voting or nonvoting rights. He explained that the outside investors could demand returns on investments through changes in interest rates or another form of return, or a right of first refusal if the credit union should ever adopt a for-profit model. Other credit union managers, including those in favor of secondary capital, told us that if done carelessly, secondary capital for credit unions could be disastrous; however, they would continue to promote the use of secondary capital provided it does not change the credit union’s ownership rights. To alleviate these concerns, others suggest allowing investors from within the industry (in-system investors). This approach, however, raises concerns about investor protection and other systemic risks. Moreover, in-system investors could impose less discipline than out-of-system investors.
According to one academic expert, the credit union industry is divided on the topic of alternative capital; the academic stated that at least 55 percent of credit unions want to avoid the capital markets, while the remainder would be more open to entering the capital markets and becoming increasingly banklike. He cautioned that alternative capital should not be used to sustain credit unions that were not already solvent. He explained that secondary capital from investors within the credit union system—that is, credit union members and other credit unions—might introduce systemic risk, wherein the risks of the issuing credit union would inherently be spread to the credit union holding the debt instrument. For example, if Credit Union A purchased subordinated debt from Credit Union B and Credit Union B failed and was forced to liquidate its assets, Credit Union A would then be financially affected, possibly resulting in two failed credit unions. Additionally, some officials in the credit union industry suggested that with appropriate disclosure, individual credit union members could invest in secondary capital instruments offered by credit unions. However, even with these disclosures (recognizing that alternative capital instruments are uninsured, nonvoting, and subordinated to other shares), it is possible that credit union members may not fully understand and appreciate the subordinated nature of their investments. We identified one proposal and one academic study that suggest how secondary capital could be utilized by all federally insured credit unions; however, these lacked sufficient detail and did not address critical issues. Specifically, the proposal and academic study did not address the specific form of the capital instruments, the criteria governing their issuance (including how they would be incorporated into the regulatory net worth requirement for credit unions), market viability and demand (including in-system or out-of-system investors), or the pricing analysis needed to effectively discuss their potential benefits and implications. As a result of the lack of detail, we were unable to fully assess the issues associated with the potential use of secondary capital by all credit unions. The secondary capital proposal—“Capital Notes”—was developed by the CUNA Mutual Group, a company that offers health insurance and financial services to credit unions. CUNA Mutual Group believes the Capital Notes program, slated for two phases, could help credit unions meet their capital needs. CUNA Mutual Group is piloting this secondary capital mechanism with low income credit unions, which are already permitted under NCUA regulations to count secondary capital toward their PCA requirements. The Capital Notes program allows low income credit unions to issue unrated subordinated debt in a private placement with flexible terms and rates. CUNA Mutual Group purchases the notes issued by the low income credit unions to hold in its investment portfolio. According to CUNA Mutual Group, if the PCA definition of net worth is changed to include secondary capital, the subsequent planned phase of the Capital Notes program will allow all federally insured credit unions to issue unrated, unsecured notes that would be purchased by a trust. The trust would then go through a ratings process and issue its own notes that institutional investors such as corporate credit unions, CUNA Mutual Group, and other insurance companies could purchase.
CUNA Mutual Group representatives stated that corporate credit unions would then purchase the highest-rated notes and CUNA Mutual Group, or other insurance companies, would most likely hold the lower-rated or first-loss notes. According to CUNA Mutual Group, the advantages of its Capital Notes program are that it allows fast-growing and low-capitalized credit unions to secure needed capital; provides additional protection to NCUSIF, the share insurance fund; allows credit unions access to capital sources already available to other depository institutions, such as banks; maintains members’ governance rights; and avoids potential abuses in sales of the notes by restricting purchasers to qualified (institutional) investors. Because the Capital Notes program began its pilot phase in December 2003, insufficient time has passed to allow for an assessment of the effectiveness of the program for low income credit unions. In addition, the motivation of secondary capital investors in low income credit unions is likely significantly different from that of investors in other federally insured credit unions. Consequently, the pricing analysis, market viability, and demand (in-system as well as out-of-system) of the first phase of Capital Notes may not be applicable to the proposed second phase of the program. We identified an academic study regarding the potential use of alternative capital instruments by credit unions. This study, issued by the Filene Research Institute and the Center for Credit Union Research, concluded that allowing credit unions to sell subordinated debt to parties outside of the credit union industry to meet their capital requirements could provide the following advantages: In terms of market discipline, the higher interest costs associated with debt of riskier credit unions would reduce the temptation of excessive risk taking by credit union managers and would send a forward-looking signal to regulators if credit unions’ risk taking increased. In terms of transparency and disclosure, marketing of subordinated debt, directly or via a pool arrangement, would require increased transparency and disclosure about the condition of credit unions. In terms of maintaining a larger cushion for the share insurance fund, the holders of subordinated debt would be compensated only after NCUSIF was fully compensated out of sales of existing assets, thereby reducing the risk to the insurance fund. In terms of increasing the incentives for prompt action by supervisors, holders of subordinated debt would encourage regulators to act promptly if credit unions became excessively risky or troubled. However, while presenting a framework for using secondary capital, the authors of the study did not provide a specific proposal. In addition, they did not address market demand for secondary capital, pricing, or the ultimate cost of these instruments to credit unions, nor did they assess the impact of external subordinated debt holders on the member-owned and member-operated structure of credit unions. NCUA first authorized the issuance of secondary capital instruments by low income credit unions in 1996. According to NCUA, it granted the authority in recognition of the special needs of these credit unions to raise capital from sources outside of their low income communities.
Under NCUA regulations, credit unions with a low income designation can (1) receive nonnatural person, nonmember deposits that are not NCUSIF-insured; (2) offer uninsured secondary capital accounts and include these accounts on the credit union’s balance sheet for accounting purposes; and (3) include these secondary capital accounts in the credit union’s net worth for PCA purposes. However, investment in low income credit unions does not offer a template for the industry because the motivations of secondary capital investors in low income credit unions may be different from those of investors in other federally insured credit unions. For example, banks may obtain credit under the Community Reinvestment Act (CRA) for their investment in low income credit unions. In addition, many foundations and philanthropic organizations are involved in providing secondary capital to low income credit unions in an effort to ensure that the credit unions are able to provide needed financial services to areas traditionally underserved by mainstream financial institutions. Moreover, as of December 31, 2003, less than 6 percent of all low income credit unions had secondary capital accounts. Additionally, low income credit unions that had secondary capital accounts represented less than 1 percent of all federally insured credit unions. Thus, in addition to the different incentives for investment, the limited experience of low income credit unions with secondary capital instruments also provides little insight into the potential market demand and pricing of secondary capital instruments for all federally insured credit unions. Corporate credit unions—whose members are credit unions, not individuals—also can issue forms of secondary capital. According to NCUA, corporate credit unions have been allowed to use secondary capital instruments to meet their regulatory capital requirements since 1992 in recognition that the ability of corporate credit unions to build capital is limited by the combined effects of (1) conservative investment standards imposed by NCUA and (2) the competitive markets in which corporate credit unions vie for credit unions’ investment funds. Capital for corporate credit unions is defined as the sum of a corporate credit union’s retained earnings, paid-in capital (both member and nonmember), and membership capital. NCUA refers to this paid-in capital and membership capital as corporate credit union secondary capital; among other things, these two types of capital are not insured by NCUSIF and are generally longer-term investments. As of December 31, 2003, 18 of the 31 corporate credit unions had member paid-in capital accounts, 30 of 31 had membership capital accounts, and none had nonmember paid-in capital accounts. However, taking into account that (1) corporate credit unions and natural person credit unions are not comparable given their member base, and (2) there are far fewer corporate credit unions compared with the total number of federally insured credit unions, those 18 corporate credit unions with member paid-in capital and 30 with membership capital do not provide a representative or sufficient sample that can be used as a model to demonstrate how secondary capital could be used for all federally insured credit unions.
Thus, the limited experience of corporate credit unions with member paid-in capital, coupled with the lack of experience with nonmember capital sources, provides little insight into the potential demand and pricing of secondary capital instruments for all federally insured credit unions. The credit union industry as a whole has neither endorsed secondary capital nor put forth a specific secondary capital proposal; however, several officials with whom we spoke referred to the principles of the National Association of Federal Credit Unions (NAFCU) board for the development of a secondary capital instrument as a set of criteria to consider. Listed in table 2 are the NAFCU board’s principles recommended for any secondary capital instrument designed for use by all federally insured credit unions. While we believe that this list incorporates key factors that should be considered for an alternative capital proposal, it should be noted that this is not an exhaustive list of all the possible concerns that may develop as a result of allowing all federally insured credit unions the use of alternative capital instruments. NAFCU officials told us that they have not been able to produce an alternative capital proposal that satisfies these seven principles because of some of the inherent tensions among the principles. For example, were alternative capital issued only within the credit union system, the number of investors would be more limited than if it were issued to the general public, suggesting that a viable alternative capital instrument should be issued in the markets—that is, outside of the credit union system. However, issuing alternative capital instruments outside of the credit union system may create another “class” of owners, thereby changing the nature of credit unions. The debate about the potential use of risk-based capital for all credit unions revolves around key structural issues, including (1) the extent to which risk-based ratios would be used to augment, versus replace, the current PCA net worth (leverage) requirements and (2) how key risk components and weights that are appropriate to the unique characteristics of credit unions would be defined. While all banks and thrifts are required to meet both a risk-based capital ratio and a leverage ratio to be classified as adequately capitalized, most credit unions are required to meet only one—a leverage ratio—to be classified as adequately capitalized. Bank and thrift regulators recognized the limitations of a solely risk-based capital requirement and continued the leverage requirements to address other factors that can affect a bank’s financial condition, which a risk-based ratio does not address. NCUA has adopted a risk-based component of PCA; however, it affects only a small percentage of credit unions—those that meet NCUA’s definition of “complex.” Though a credit union trade association has put forward two risk-based capital proposals, neither has garnered industry consensus. Moreover, each proposal lacked details of key components upon which to base any assessment of their merits. NCUA officials told us they are developing, but have not yet finalized, a risk-based capital proposal to augment current PCA for all credit unions that they believe acknowledges the unique nature of credit unions and incorporates the relevant and material risks credit unions face. FDICIA requires all banks and thrifts to meet both a risk-based and a leverage requirement. Leverage ratios have been part of bank regulatory requirements since the 1980s. 
They were continued after the introduction of risk-based capital requirements as a cushion against risks not explicitly covered in the risk-based capital requirements. According to regulatory guidelines on capital adequacy, the final supervisory judgment of a bank’s capital adequacy may differ from the conclusions that might be drawn solely from the risk-based capital ratio. Banking regulators recognized that the risk-based capital ratio does not incorporate other factors that can affect a bank’s financial condition, such as interest-rate exposure, liquidity risks, the quality of loans and investments, and management’s overall ability to monitor and control financial and operating risks. FDICIA also requires bank regulators to monitor other risks, such as interest-rate and concentration risks. FDICIA requires the federal bank and thrift regulators to establish criteria for classifying depository institutions into five capital categories: well-capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized. Figure 3 illustrates four capital categories and ratio requirements of FDICIA’s PCA provisions. Although not shown in figure 3, a fourth ratio—tangible equity—is used to categorize an institution as critically undercapitalized. Any institution that has a 2 percent or less tangible equity ratio is considered critically undercapitalized, regardless of its other capital ratios. The amount of capital held by a bank is to be greater than or equal to the amount required by the leverage ratio. However, if the risk-based capital calculation yields a higher capital requirement, the higher amount is the minimum level required. Although U.S. bank risk-based capital guidelines address several types of risk, only credit and market risk are explicitly quantified. The quantified risk-based capital standard is defined in terms of a ratio of qualifying capital divided by risk-weighted assets. All banks are required to calculate their credit risk for assets, such as loans and securities, and off-balance sheet items, such as derivatives or letters of credit. There are two qualifying capital components in the risk-based credit risk computation—core capital (tier 1) and supplementary capital (tier 2). In addition to credit risk, banks with significant market risk exposures are required to calculate a risk-based capital ratio that takes into account market risk in positions such as securities and derivatives in an institution’s trading account and all foreign exchange and commodity positions, wherever they are located in the bank. The market-risk capital ratio augments the definitions of qualifying capital in the credit risk requirement by adding an additional capital component (tier 3). Tier 3 capital is unsecured, subordinated debt that is fully paid up, has an original maturity of at least 2 years, and is redeemable before maturity only with approval by the regulator. To be included in the definition of tier 3 capital, the subordinated debt must include a lock-in clause precluding payment of either interest or principal (even at maturity) if the payment would cause the issuing bank’s risk-based capital ratio to fall or remain below the minimum requirement. NCUA’s PCA risk-based capital rule currently applies to relatively few credit unions—approximately 8 percent of all federally insured credit unions that were designated as “complex” as of December 31, 2003.
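The bank-style computation described above can be summarized in a short sketch: a leverage ratio of capital to total assets and a risk-based ratio of qualifying (tier 1 plus tier 2) capital to risk-weighted assets. The risk-weight buckets, category names, and dollar amounts below are simplified, Basel-style assumptions for illustration only and are not taken from this report.

```python
# Illustrative sketch of the two bank capital tests discussed above.
RISK_WEIGHTS = {                      # assumed, simplified risk-weight buckets
    "cash_and_treasuries": 0.00,
    "agency_securities": 0.20,
    "residential_mortgages": 0.50,
    "commercial_loans": 1.00,
}

def risk_weighted_assets(exposures):
    """Sum of each exposure amount times its assumed risk weight."""
    return sum(amount * RISK_WEIGHTS[category] for category, amount in exposures.items())

def capital_ratios(tier1, tier2, total_assets, exposures):
    rwa = risk_weighted_assets(exposures)
    return {
        "leverage_ratio": tier1 / total_assets,           # compared with the 4 percent benchmark
        "total_risk_based_ratio": (tier1 + tier2) / rwa,  # compared with the 8 percent benchmark
    }

exposures = {
    "cash_and_treasuries": 20_000_000,
    "agency_securities": 30_000_000,
    "residential_mortgages": 25_000_000,
    "commercial_loans": 25_000_000,
}
print(capital_ratios(tier1=5_000_000, tier2=2_000_000,
                     total_assets=100_000_000, exposures=exposures))
# leverage_ratio: 5.0%; total_risk_based_ratio: 7 / 43.5 ≈ 16.1%
```

As noted above, the binding requirement is whichever calculation implies the larger amount of capital.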
It should be noted that none of the five largest credit unions, and only one of the top 10 credit unions in terms of assets, met NCUA’s definition of complex. CUMAA mandated a risk-based net worth requirement for “complex” credit unions, for which NCUA was required to formulate a definition according to the risk level of the credit union’s portfolios of assets and liabilities. These credit unions are subject to an additional risk-based net worth requirement to compensate for material risks, against which a 6 percent net worth ratio may not provide adequate protection. Specifically, the risk-based net worth calculation measures the risk level of on- and off-balance sheet items in the credit union’s “risk portfolios.” NCUA uses two methods to determine whether a complex credit union meets its risk-based net worth requirement: (1) a “standard calculation,” which uses specific standard component amounts; and (2) a calculation using alternative component amounts. A credit union’s risk-based net worth requirement is the sum of eight standard components, which include such items as unused member business loan commitments and allowance for loan and lease losses. Appendix II provides an example of the standard calculation of the risk-based net worth requirement, including the definitions of the risk portfolios and weighted average life for investments. Although not shown in appendix II, the alternative method of calculating the risk-based requirement involves weighting four of the risk portfolio components—long-term real estate loans, member business loans, investments, and loans sold with recourse—according to their remaining maturity, weighted average life, and weighted average recourse, respectively. In addition, the risk-based net worth requirement allows credit unions that succeed in demonstrating mitigation of interest-rate or credit risk to apply to NCUA for a risk mitigation credit. The credit, if approved, would reduce the risk-based net worth requirement a credit union must satisfy to remain classified as adequately capitalized or above. According to NCUA, between March 2002 and December 2003, 38 credit unions failed the standard risk-based net worth requirement, with two credit unions failing both the standard and alternative calculation requirements. In addition, toward the end of 2003 two credit unions submitted applications for a risk mitigation credit. The credit union officials with whom we spoke disagreed about whether the current PCA system should be replaced or augmented by a risk-based PCA system. One credit union official—a recognized proponent of secondary capital—told us that risk-based capital should be used to augment, but not replace, the current leverage-based net worth capital requirements. Conversely, two industry groups told us that they see risk-based capital requirements serving as a complement to secondary capital, if it were allowed to be included as a component of net worth. Many credit union officials told us that current PCA is “one size fits all” but would not comment further on risk-based capital. In addition, NASCUS told us that it has recently endorsed the risk-based language in a House of Representatives bill, although it continues to support secondary capital for all credit unions. However, it should be noted that for most credit unions, risk-based assets are less than total assets; therefore, a given amount of capital would have a higher net worth ratio if risk-based assets were used.
And capital requirements would likely be reduced if risk-based capital were an alternative, rather than a complement, to leverage ratios. CUNA put forward two risk-based capital proposals that they believe (1) would preserve the requirement that regulators must take prompt and forceful supervisory actions against credit unions that become seriously undercapitalized and (2) would not encourage well-capitalized credit unions to establish such large buffers over minimum net worth requirements that they would become overcapitalized. However, both proposals lacked details of key components that would be needed in order to assess their merits. The first CUNA proposal does not provide a clear definition of risk assets. The second CUNA proposal does not provide specific risk weights and asset classifications appropriate for credit unions. The first proposal would replace the current two-phased PCA system with a single system using risk-based and net worth ratio requirements for all credit unions. This system would incorporate NCUA’s pre-CUMAA definition of risk assets—all loans not guaranteed by the federal government, and all investments with maturities over 5 years—into the PCA system by modifying the current definition of net worth ratio. Specifically, the first proposal would lower the current net worth ratios for each PCA category to parallel the leverage ratio requirement for banks and thrifts and add a risk-based net worth ratio requirement using the existing PCA threshold levels for credit unions. For example, an adequately capitalized credit union would be defined as having a risk-based net worth ratio of 6 percent or greater and a net worth ratio of 4 percent or greater. Under this proposal, if a credit union’s net worth ratio falls into different categories by risk and total assets, the lower classification would apply. The proposal stated that risk assets could be defined as nonguaranteed loans and long-term investments, or NCUA could be instructed to define risk assets in a manner consistent with its pre-CUMAA requirements. The second proposal would incorporate components of both the Basel capital framework currently in use by banks and thrifts in the United States and the risk-based portion of the current credit union PCA applicable to complex credit unions. Specifically, this proposal states that net worth requirements could be based on risk weights for assets as in place for banks, but with the weights established on the basis of both credit and interest-rate risk. Under this proposal, the risk weights could be set by NCUA based on the Basel system. According to the second proposal, it is likely that NCUA could choose to adopt some credit-risk weights that are different from those currently in use by bank and thrift regulators under the Basel system because some of the weights would be assigned on the basis of interest-rate risk. The proposed risk-based ratio requirements for each PCA category would parallel the current total risk-based requirement for banks and thrifts. In addition, this proposal states that a credit union could also be required to maintain a net worth ratio equivalent to the leverage ratio required for banks and thrifts. Similar to the first proposal, if a credit union’s net worth ratio falls into different categories by risk and total assets, the lower classification would apply. 
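Both CUNA proposals turn on the same "lower classification applies" rule: classify the credit union separately under each ratio and keep the weaker result. The sketch below is a minimal illustration of that rule under the first proposal. Only the 6 percent risk-based and 4 percent net worth figures for adequately capitalized come from the report (the report's example of the second proposal's 8 percent threshold follows this sketch); the remaining thresholds, names, and amounts are assumptions for illustration.

```python
# Illustrative "lower classification applies" rule; threshold values other than
# the 6 percent / 4 percent adequately capitalized pair are assumptions.
CATEGORIES = [  # ordered from strongest to weakest
    "well capitalized",
    "adequately capitalized",
    "undercapitalized",
    "significantly undercapitalized",
    "critically undercapitalized",
]

# (category, minimum ratio) pairs; a ratio below every floor falls to the last category.
RISK_BASED_FLOORS = [("well capitalized", 0.07), ("adequately capitalized", 0.06),
                     ("undercapitalized", 0.04), ("significantly undercapitalized", 0.02)]
NET_WORTH_FLOORS = [("well capitalized", 0.05), ("adequately capitalized", 0.04),
                    ("undercapitalized", 0.03), ("significantly undercapitalized", 0.02)]

def classify(ratio, floors):
    for category, minimum in floors:
        if ratio >= minimum:
            return category
    return "critically undercapitalized"

def pca_classification(risk_based_ratio, net_worth_ratio):
    """Classify under each ratio separately, then apply the weaker (lower) result."""
    a = classify(risk_based_ratio, RISK_BASED_FLOORS)
    b = classify(net_worth_ratio, NET_WORTH_FLOORS)
    return max(a, b, key=CATEGORIES.index)  # larger index = weaker category

# A credit union with a 6.4 percent risk-based ratio but a 3.5 percent net worth
# ratio would take the weaker of the two classifications.
print(pca_classification(0.064, 0.035))  # undercapitalized
```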
For example, in order to be adequately capitalized under the second proposal, a credit union would have to have a risk-based ratio of 8 percent or greater and a net worth ratio of 4 percent or greater. According to NCUA officials, NCUA envisions a risk-based PCA system similar in structure to that currently employed in the banking system. However, they stated that NCUA would tailor the risk weights and the categories into which assets fall, to take into consideration the unique nature of credit unions and the loss histories of their asset portfolios. In addition, the NCUA officials told us that a risk-based credit union PCA system should be designed to address all relevant and material risks (for example, interest-rate risk). According to these NCUA officials, the credit union PCA system should be robust enough so as not to be “one-size-fits-all,” but simple enough to facilitate administration of the system and be well understood by credit unions. NCUA officials told us that they are in the process of developing a risk-based PCA proposal that would be used for all credit unions, not just complex credit unions. See appendix III for items being used in the development of NCUA’s risk-based PCA proposal. NCUA officials emphasized that the CUMAA mandate to take prompt corrective action to resolve problems at the least long-term cost to NCUSIF is good public policy and consistent with NCUA’s fiduciary responsibility to the share insurance fund. However, they stated that they believe additional flexibility is needed to enable NCUA to work with problem institutions. They explained that the additional flexibility could be structured to constrain any tendency toward regulatory forbearance and preserve the objective of PCA. NCUA officials told us that they believe a revised system would alleviate most concerns that credit unions have with PCA. They believe changing the system would provide credit union management with the ability to manage compliance by making adjustments to their asset portfolios, maintain ample protection for the system and individual credit unions, and preserve NCUA’s ability to address net worth problems. NCUA officials told us that such a system would likely obviate the need or desire for secondary capital for the vast majority of credit unions. Despite concerns raised by some in the credit union industry, available information indicates no compelling need for using secondary capital instruments to bolster the net worth of credit unions, or to make other significant changes to PCA as it has been implemented for credit unions. Available indicators suggest that the credit union industry as a whole has not been overly constrained as a result of the implementation of PCA. Notably, credit unions were able to maintain capital levels well in excess of the PCA requirements during a period of rapid share or deposit growth. One of the inherent weaknesses in PCA is its focus on capital, which typically is a lagging indicator of a financial institution’s health. As such, it will be important for NCUA to distinguish between capital deterioration that occurs because of fundamental weaknesses in the institution’s structure or management versus temporary capital shortfalls due to constraints beyond a credit union’s control.
While we do not find the arguments for using secondary capital instruments to be compelling, to the extent that well-managed and -operated credit unions do experience temporary capital constraints, NCUA may want to revisit the concept of an abbreviated net worth restoration plan for marginally undercapitalized credit unions. Consideration of changes such as this seems more consistent with the notion that the problems some credit unions may be facing are temporary and, therefore, best tackled with temporary solutions rather than more permanent ones, such as secondary capital instruments. Allowing credit unions to use secondary capital instruments to meet their regulatory net worth requirements would raise a number of issues and concerns. One of the key issues is who would be allowed to invest in the secondary capital instruments of credit unions. While allowing credit unions to sell secondary capital instruments to investors outside of the credit union industry would provide market discipline, this would raise concerns about the potential impact on the member-owned, cooperative nature of credit unions. Some have proposed limiting potential investors to credit union members, other credit unions, and corporate credit unions; however, in-system investors could impose less discipline and could raise systemic risk concerns if secondary capital investments were to create a situation in which weaker credit unions brought down stronger ones. Other issues relate to the specific form of the capital instruments and how they would be incorporated into the regulatory net worth requirement for credit unions. The credit union industry itself appeared divided on the desirability or appropriate structure of secondary capital instruments. Conceptually, the potential use of a risk-based capital system for all credit unions appears less controversial. Risk-based capital is intended to require institutions with riskier profiles to hold more capital than institutions with less risky profiles. However, not all of the risks that credit unions face, such as liquidity and operational risk, can be quantified. In recognition of the limitations of risk-based capital systems, the bank and thrift regulators use both risk-based and nonrisk-weighted (leverage ratio) capital requirements for PCA purposes. The requirements are used in tandem to better ensure safety and soundness in banks and thrifts. Among the numerous issues that would need to be addressed in a risk-based capital proposal, given the unique nature of credit unions, would be the appropriate risk weights and categories into which assets fall and the appropriate risk-based and nonrisk-based capital ratios for each PCA category. We are aware that NCUA is constructing a more detailed risk-based capital proposal that includes both risk-based and leverage requirements for all credit unions and believe that any proposal should be based on the premise that risk-based capital be used to augment, but not replace, the current net worth requirement for credit unions. We remain a strong supporter of PCA as a regulatory tool. The system of PCA implemented for credit unions is comparable with the PCA system that bank and thrift regulators have used for over a decade.
The concerns raised by the credit union industry appear to reflect the inherent tension between credit union managers’ desire to maintain the optimal amount of capital to efficiently fuel growth and returns to credit union members and Congress’s desire to protect the federal share insurance funds from losses that could have been prevented by early and forceful supervisory action. As we stated in our October 2003 report, credit unions have been subject to PCA for a short time, and the advantages and disadvantages of the current program are not yet evident. Additional time and greater experience with the use of PCA in the credit union industry would provide greater insight into the need for any significant changes to PCA as well as the best options for any changes. We provided a draft of this report to the Chairman of the National Credit Union Administration and the Secretary of the Treasury for review and comment. We received written comments from NCUA that are reprinted in appendix IV. In addition, we received technical comments from NCUA and Treasury that we incorporated into this report, as appropriate. NCUA concurred with this report’s assessment that there is no compelling need for secondary capital. For example, NCUA concurred that there are key unresolved issues, such as whether secondary capital instruments would be commercially viable, to whom these instruments could and should be sold (e.g., inside versus outside investors), the effects on the member-owned, cooperative structure of credit unions, and any safety and soundness and systemic risk implications posed by this activity. NCUA also concurred that there is a lack of consensus within the credit union system on the need for and appropriate structure of secondary capital instruments. Finally, NCUA stated that the vast majority of insured credit unions maintain extremely strong capital positions, notwithstanding a recent prolonged period of rapid share growth. NCUA stated that it concurred with views expressed by many within the credit union industry that the current PCA tripwires were too high. NCUA disagreed with Treasury’s rationale for the higher limit relative to that imposed on banks and thrifts—1 percent for the deposit in NCUSIF and another 1 percent for the typical credit union’s capital invested in corporate credit unions. NCUA stated that under GAAP, which Congress mandated credit unions follow, the NCUSIF deposit is considered an asset on the financial statements of a credit union. Further, NCUA stated that the NCUSIF deposit is not related to a credit union’s net worth from either an accounting or financial risk standpoint. In addition, NCUA noted that not all credit unions belong to corporate credit unions or hold this form of investment; therefore, using a “one size fits all” approach to trigger PCA supervisory actions based on this assumption is inherently unfair. Finally, NCUA stated that PCA tripwires are too high, penalize institutions with conservative risk profiles, and allow higher-risk earnings strategies without commensurate net worth levels. While we did not perform an evaluation of PCA, which would include a discussion of the thresholds, we note that the NCUSIF deposit is not liquid and, therefore, not immediately accessible for credit unions to use as a capital buffer.
Though we agree that not all credit unions are engaged in corporate credit union investments, we believe that these investments are still relevant as a PCA consideration and any risk-based capital standards should appropriately recognize these investments. NCUA stated that based on its experience to date with the PCA system for federally insured credit unions, adjustments are needed to better achieve PCA’s overall objectives. Specifically, NCUA stated that the adjustments should move PCA to a more fully risk-based system, with a lower leverage ratio required of a credit union to meet the well-capitalized level. NCUA believes that a well-capitalized leverage requirement in the range of 5 percent would be more than sufficient to meet the safety and soundness goals of PCA. However, NCUA did not provide evidence that the current 7 percent net worth requirement has been a hardship to the credit union industry. As noted in this report, credit unions cannot quickly raise their capital through the issuance of capital stock when a financial need arises; they must rely on retained earnings to build sufficient capital—which necessarily takes time. Further, we believe that the generally favorable economic climate for credit unions, coupled with the relatively short amount of time that PCA has been in place for credit unions, does not provide a sufficient test of the current system of PCA for credit unions to determine if changes are warranted. NCUA stated that it recognized that, as our draft report indicated, the efficacy of a risk-based system is highly dependent on the details of the risk categories and weights, as well as the complementary relationship between the risk-based and leverage requirements. However, NCUA stated that the draft report suggested that a risk-based system would result in risk assets being lower than total assets for most credit unions, resulting in a given amount of capital producing a higher net worth ratio. NCUA stated that such a result was not a foregone conclusion. NCUA indicated that a proposal under consideration included risk categories with weights at and above 100 percent. The statement in the draft report was based on our discussion with representatives of the credit union industry. As we noted in our draft report, no detailed proposals regarding a risk-based system for all credit unions were available for our analysis, including that being developed by NCUA. In the absence of details, we cannot comment on the ultimate effect that a proposal still being developed would have on the required capital levels for credit unions. However, we believe that, used in tandem with leverage capital requirements, any risk-based capital standards should appropriately recognize the risks credit unions face. In response to the statement in our draft report that PCA was intended to act as a restraint on growth, NCUA stated that it was important to differentiate overly aggressive growth from robust growth, consistent with sound business strategy, experienced by healthy credit unions. While we agree that there are different types of growth, institutions still need to hold sufficient capital regardless of the type of growth experienced. As noted in this report, PCA was intended to curb aggressive growth, since uncontrolled growth was one of the common attributes of banks and thrifts that failed during the banking crisis of the late 1980s and early 1990s.
Moreover, our analysis of aggregated credit union data indicated that credit unions have been able to maintain a rate of growth that has exceeded that of banks and thrifts in the three full calendar years that PCA has been in place for credit unions. NCUA noted that our draft report suggested that NCUA revisit the concept of an abbreviated NWRP for marginally undercapitalized credit unions in situations involving temporary capital shortfalls. It noted that the statutory language of CUMAA precluded NCUA from providing any significant regulatory relief in this regard. NCUA stated that it supported a statutory change to provide NCUA the regulatory authority to waive the requirement to submit an NWRP for credit unions that have a temporary, marginal drop in their net worth ratio below adequately capitalized, as determined on a case-by-case basis. While NCUA put forth a proposed rule on an abbreviated NWRP, NCUA did not pursue it further. We believe it is important that NCUA explore and use all of the available options and discretion provided by CUMAA. While an abbreviated NWRP may not be viewed by NCUA or the industry as granting significant regulatory relief, the experience gained with an abbreviated NWRP would provide NCUA and Congress with additional information regarding the need for further regulatory authorities. Moreover, it is important to note that none of the federal bank or thrift regulators has authority similar to that being sought by NCUA. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its issuance date. At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs. We also will send copies to the National Credit Union Administration and the Department of the Treasury and make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Harry Medina, Assistant Director. If you or your staffs have any further questions, please contact me at (202) 512-8678 or [email protected], or Harry Medina, Assistant Director, at (415) 904-2220 or [email protected]. Key contributors are acknowledged in appendix V. To identify and describe concerns regarding the current capital requirements for credit unions, we interviewed credit union industry groups, several credit union chief executive officers, credit union regulators, and two banking regulators. Additionally, through these interviews we gathered information on the issues and concerns associated with the potential use of secondary capital and risk-based capital by credit unions, including any documented proposals. We also conducted a literature search to identify studies on the potential use of secondary capital by credit unions and spoke with academics and other industry observers. To illustrate credit union prompt corrective action (PCA) capital levels over time, we conducted research on PCA regulations and reviewed the National Credit Union Administration’s (NCUA) Form 5300 (call report) database for 1994-2003 for federally insured, natural person credit unions. We reviewed NCUA-established procedures for verifying the accuracy of the Form 5300 database and found that the data constituting this database are verified on an annual basis, either during each credit union’s examination or through off-site supervision.
We determined that the data were sufficiently reliable for the purposes of this report. In addition, we reviewed capital requirements of banks and thrifts for comparison with credit union capital requirements. Credit unions have been subject to PCA programs for a short time, and the advantages and disadvantages of the current programs are not yet evident. As a result, we did not perform an evaluation or assessment of credit union PCA. We are aware that NCUA is constructing a more detailed risk-based capital proposal that incorporates both risk-based and leverage requirements; however, due to the lack of formalized details, we could not perform a meaningful assessment of the proposal. Given that none of the secondary capital or risk-based PCA proposals provided to us have garnered credit union industry consensus or contain sufficient details on which to base an assessment, we did not perform an evaluation of these proposals or an analysis of their potential benefits and implications. We conducted our work in Washington, D.C., from November 2003 through July 2004 in accordance with generally accepted government auditing standards. While NCUA has not finalized its risk-based PCA proposal for all credit unions, NCUA officials provided us with the following items being used in the development of its risk-based PCA proposal: NCUA supports a statutorily mandated PCA system, with a minimum core leverage requirement (a hard floor of 2 percent of total assets for the critically undercapitalized category); a statutory definition of net worth (with the ability through regulation to reduce what qualifies as net worth, not increase it); and statutory thresholds based on risk assets defined by NCUA for the various net worth categories. NCUA also believes it should be provided with the authority to set the remaining elements of the risk-based PCA system by regulation. With the exception of being able to set by regulation a minimum level of net worth in relation to total assets (for example, 4 percent or 5 percent, tied to the credit union's CAMEL rating) to be considered adequately capitalized, NCUA believes the current thresholds (but in relation to risk assets) are acceptable and best left established by statute. However, NCUA wants to keep the parity provision in the current statute, which provides the authority to change the thresholds by regulation, commensurate with any changes to the banks' PCA thresholds. With regard to the net worth ratio numerator, NCUA also supports a statutory definition for net worth, but believes the current definition should be expanded beyond retained earnings under generally accepted accounting principles (GAAP). NCUA believes a better definition of net worth is equity of the credit union as determined under GAAP and as authorized by the NCUA board. NCUA believes this would provide the NCUA board with the authority, through regulation, to subtract from net worth those balance sheet items that the board deems appropriate (such as goodwill, which has no value in the event of a payout). Additionally, NCUA believes that this definition preserves the requirement to comply with GAAP and limits statutorily what can be included in net worth, while providing NCUA with the flexibility to reduce assets that count toward net worth for PCA purposes but that do not have value to the insurance fund. With regard to the net worth ratio denominator, NCUA advocates having the regulatory flexibility to set the risk weights for assets and adjust them as it deems appropriate.
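To make the mechanics of this debate concrete, the sketch below compares a net worth ratio computed against total assets with one computed against risk-weighted assets, and shows how a deduction from the numerator (such as goodwill) and the choice of risk weights drive the result. The balance sheet figures, asset categories, and weights are hypothetical illustrations of our own, not figures from NCUA's proposal, which was not available in sufficient detail for our analysis.

```python
# Illustrative sketch only: hypothetical balance sheet figures and risk weights,
# not NCUA's actual proposal, which had not been finalized at the time of our review.

def net_worth_ratio(net_worth, denominator):
    """Return the net worth ratio as a percentage of the chosen denominator."""
    return 100.0 * net_worth / denominator

# Hypothetical credit union balance sheet (amounts in millions of dollars).
retained_earnings = 7.5
goodwill = 0.5                      # an item the NCUA board might exclude from net worth
assets = {                          # book value by asset class
    "cash_and_short_term": 20.0,
    "consumer_loans": 50.0,
    "first_mortgages": 25.0,
    "other_assets": 5.0,
}
# Hypothetical risk weights; NCUA indicated weights could be at and above 100 percent.
risk_weights = {
    "cash_and_short_term": 0.20,
    "consumer_loans": 0.75,
    "first_mortgages": 1.00,
    "other_assets": 1.50,
}

total_assets = sum(assets.values())                                # 100.0
risk_assets = sum(assets[k] * risk_weights[k] for k in assets)     # 74.0
net_worth = retained_earnings - goodwill                           # numerator after deduction

print(f"Ratio to total assets: {net_worth_ratio(net_worth, total_assets):.2f}%")  # 7.00%
print(f"Ratio to risk assets:  {net_worth_ratio(net_worth, risk_assets):.2f}%")   # 9.46%
```

As the example suggests, whether a risk-based denominator falls below or above total assets for a given credit union depends entirely on the weights ultimately assigned, which is why we could not assess the effect of a proposal still under development.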
In cases where there is a marginal drop in net worth below adequately capitalized, NCUA advocates having the regulatory flexibility to temporarily waive a credit union's requirement to submit a net worth restoration plan if: (a) the credit union is CAMEL-rated 1 or 2 with a net worth ratio in the range of 5 percent to 7 percent, (b) the credit union's book of business does not present a safety and soundness issue, and (c) the credit union's assets are well managed. In addition, NCUA desires the regulatory flexibility to revisit the credit union after a specified time to determine if the temporary waiver is still appropriate and, if not, require the credit union to submit a net worth restoration plan. NCUA believes that this would reduce the burden placed on credit unions experiencing a small, temporary decline in the net worth ratio due to circumstances such as unsolicited, robust share growth that do not pose a safety and soundness concern. Further, NCUA believes such a provision would still provide NCUA with adequate authority to address any concerns on a case-by-case basis. In addition to those named above, Heather T. Dignan, Landis L. Lindsey, Kimberly A. Mcgatlin, Carl M. Ramirez, Barbara M. Roesmann, Paul G. Thompson, John H. Treanor, and Richard J. Vagnoni made key contributions to this report.
Since the passage of the Credit Union Membership Access Act of 1998 (CUMAA), many in the credit union industry have sought legislative changes to the net worth ratio central to prompt corrective action (PCA). The current debate centers on the issue of allowing federally insured credit unions to include additional forms of capital within the definition of net worth. In light of the issues surrounding the debate, GAO reviewed (1) the underlying concerns that have prompted the credit union industry's interest in making changes to the current capital requirements, (2) the issues associated with the potential use of secondary capital in all federally insured credit unions, and (3) the issues associated with the potential use of risk-based capital in all federally insured credit unions. The credit union industry's interest in making changes to the current capital requirements for credit unions appears to be driven by three primary concerns: (1) that restricting the definition of net worth solely to retained earnings could trigger PCA actions due to conditions beyond credit unions' control; (2) that PCA in its present form acts as a restraint on credit union growth; and (3) that PCA tripwires, or triggers for corrective action, are too high given the conservative risk profile of most credit unions. Despite these concerns, available indicators suggest that the credit union industry has not been overly constrained as a result of the implementation of PCA. As a group, credit unions have maintained capital levels well above the level needed to be considered well-capitalized and have grown at rates exceeding those of other depository institutions during the three calendar years that PCA has been in place for credit unions. Allowing credit unions to use secondary capital instruments to meet their regulatory net worth requirements would raise a number of issues and concerns, with perhaps the most important issue centering on who would purchase the secondary capital instruments. While outside investors would provide market discipline, this would raise concerns about the potential impact on the member-owned, cooperative structure of credit unions. Inside investors, however, could impose less discipline and could raise systemic risk concerns if secondary capital investments created a situation in which weaker credit unions could bring down stronger ones. Other issues relate to the specific form of the capital instruments for credit unions. The credit union industry itself appeared divided on the desirability or appropriate structure of secondary capital instruments. Conceptually, the use of risk-based capital to address the concerns some in the credit union industry expressed about PCA is less controversial. Though two risk-based capital proposals were put forward, neither garnered industry consensus and both lacked details of key components upon which to base any assessment of their merits. Risk-based capital is intended to reflect the unique risk profile of individual financial institutions; however, there are other factors that can affect an institution's financial condition that are not easily quantified. In recognition of the limitations of risk-based capital systems, bank and thrift regulators use leverage and risk-based capital requirements in tandem.
GAO is aware that NCUA is constructing a more detailed risk-based capital proposal that incorporates both risk-based and leverage requirements; however, due to the lack of formalized details, GAO could not perform a meaningful assessment of the proposal.
FPS has not taken actions against some guard contractors that did not comply with the terms of the contracts. According to FPS guard contracts, a contractor has not complied with the terms of the contract if the contractor has a guard working without valid certifications or background suitability investigations, falsifies a guard's training records, does not have a guard at a post, or has an unarmed guard working at a post at which the guard should be armed. If FPS determines that a contractor does not comply with these contract requirements, it can—among other things—assess a financial deduction for nonperformed work, elect not to exercise a contract option, or terminate the contract for default or cause. We reviewed the official contract files for the 7 contractors who, as we testified in July 2009, had guards performing on contracts with expired certification and training requirements to determine what action, if any, FPS had taken against these contractors for contract noncompliance. The 7 contractors we reviewed had been awarded several multiyear contracts totaling $406 million to provide guards at federal facilities in 13 states and Washington, D.C. According to the documentation in the contract files, FPS did not take any enforcement action against the 7 contractors for not complying with the terms of the contract, a finding consistent with the DHS Inspector General's 2009 report. In fact, FPS exercised the option to extend the contracts of these 7 contractors. FPS contracting officials told us that the contracting officer who is responsible for enforcing the terms of the contract considers the appropriate course of action among the available contractual remedies on a case-by-case basis. For example, the decision of whether to assess financial deductions is a subjective assessment in which the contracting officer and the contracting officer technical representative (COTR) take into account the value of the nonperformance and the seriousness of the deficiency, according to FPS contracting officials. FPS requires a performance evaluation of each contractor annually and at the conclusion of contracts exceeding $100,000 and requires that these evaluations and other performance-related documentation be included in the contract file. Contractor performance evaluations are one of the most important tools available for ensuring compliance with contract terms. Moreover, given that other federal agencies rely on many of the same contractors to provide security services, completing accurate evaluations of a contractor's past performance is critical. However, we found that FPS's contracting officers and COTRs did not always evaluate contractors' performance as required, and some evaluations were incomplete and not consistent with contractors' performance. We reviewed a random sample of 99 contract performance evaluations from calendar year 2006 through June 2009. These evaluations were for 38 contractors. Eighty-two of the 99 contract performance evaluations showed that FPS assessed the quality of services provided by the majority of its guard contractors as satisfactory, very good, or exceptional. Of the remaining 17 evaluations, 11 rated the contractor's performance as marginal, 1 rated it as unsatisfactory, and 5 were incomplete.
According to applicable guidance, a contractor must meet contractual requirements to obtain a satisfactory evaluation, and a contractor should receive an unsatisfactory evaluation if its performance does not meet most contract requirements and recovery in a timely manner is not likely. Nevertheless, we found instances where some contractors received a satisfactory or better rating although they had not met some of the terms of the contract. For example, contractors receiving satisfactory or better ratings included the 7 contractors discussed above that had guards with expired certification and training records working at federal facilities. In addition, some performance evaluations that we reviewed did not include a justification for the rating, and there was no other supporting documentation in the official contract file to explain the rating. Moreover, there was no information in the contract file that indicated that the COTR had communicated any performance problems to the contracting officer. As of February 2010, FPS had yet to provide some of its guards with all of the required X-ray or magnetometer training. For example, we reported in July 2009 that in one region, FPS had not provided the required X-ray or magnetometer training to 1,500 guards since 2004. FPS officials subsequently told us that the contract for this region requires only guards who are assigned to work on posts that contain screening equipment to have 8 hours of X-ray and magnetometer training. However, in response to our July 2009 testimony, FPS now requires all guards to receive 16 hours of X-ray and magnetometer training. As of February 2010, these 1,500 guards had not received the 16 hours of training but continued to work at federal facilities in this region. FPS plans to provide X-ray and magnetometer training to all guards by December 2010. X-ray and magnetometer training is important because the majority of the guards are primarily responsible for using this equipment to monitor and control access points at federal facilities. Controlling access to a facility helps ensure that only authorized personnel, vehicles, and materials are allowed to enter, move within, and leave the facility. FPS currently does not have a fully reliable system for monitoring and verifying whether its 15,000 guards have the certifications and training to stand post at federal facilities. FPS is developing a new system—Risk Assessment and Management Program (RAMP)—to help it monitor and verify the status of guard certifications and training. However, in our July 2009 report, we raised concerns about the accuracy and reliability of the information that will be entered into RAMP. Since that time, FPS has taken steps to review and update all guard training and certification records. For example, FPS is conducting an internal audit of its CERTS database. However, as of February 2010, the results of that audit showed that FPS was able to verify that about 8,600 of its 15,000 guards met the training and certification requirements. FPS is experiencing difficulty verifying the status of the remaining 6,400 guards. FPS has also received about 1,500 complaints from inspectors regarding a number of problems with RAMP. For example, some inspectors said it was difficult and sometimes impossible to find guard information in RAMP and to download guard inspection reports. Thus, they were completing the inspections manually. Other inspectors have said it takes almost 2 hours to log on to RAMP.
Consequently, on March 18, 2010, FPS suspended the use of RAMP until it resolves these issues; FPS is currently working to resolve them. Once guards are deployed to a federal facility, they do not always comply with assigned responsibilities (post orders). As we testified in July 2009, we identified substantial security vulnerabilities related to FPS's guard program. FPS also continues to find instances where guards are not complying with post orders. For example, 2 days after our July 2009 hearing, a guard fired his firearm in a restroom in a level IV facility while practicing drawing his weapon. In addition, FPS's own penetration testing—similar to the covert testing we conducted in May 2009—showed that guards continued to experience problems complying with post orders. Since July 2009, FPS has conducted 53 similar penetration tests at federal facilities in the 6 regions we visited, and in over 66 percent of these tests, guards allowed prohibited items into federal facilities. We accompanied FPS on two penetration tests in August and November 2009, and guards at these level IV facilities failed to identify a fake bomb, gun, and knife during X-ray and magnetometer screening at access control points. During the first test we observed in August 2009, FPS agents placed a bag containing a fake gun and knife on the X-ray machine belt. The guard failed to identify the gun and knife on the X-ray screen, and the undercover FPS official was able to retrieve his bag and proceed to the check-in desk without incident. During a second test, a knife was hidden on an FPS officer; the magnetometer detected the knife, as did the hand wand, but the guard failed to locate it, and the FPS officer was able to gain access to the facility. According to the FPS officer, the guards who failed the test had not been provided the required X-ray and magnetometer training. Upon further investigation, only 2 of the 11 guards at the facility had the required X-ray and magnetometer training. In response to the results of this test, FPS debriefed the contractor and moved one of the guard posts to improve access control. In November 2009, we accompanied FPS on another test of security countermeasures at a different level IV facility. As in the previous test, an FPS agent placed a bag containing a fake bomb on the X-ray machine belt. The guard operating the X-ray machine did not identify the fake bomb, and the inspector was allowed to enter the facility with it. In a second test, an FPS inspector placed a bag containing a fake gun on the X-ray belt. The guard identified the gun and the FPS inspector was detained. However, the FPS inspector was told to stand in a corner and was not handcuffed or searched as required. In addition, while all the guards were focusing on the individual with the fake gun, a second FPS inspector walked through the security checkpoint with two knives without being screened. In response to the results of this test, FPS suspended 2 guards and provided additional training to 2 guards. In response to our July 2009 testimony, FPS has taken a number of actions that, once fully implemented, could help address the challenges the agency faces in managing its contract guard program. For example, FPS has taken the following actions:

Increased guard inspections at facilities in some metropolitan areas. FPS has increased the number of guard inspections to two a week at federal facilities in some metropolitan areas.
Prior to this new requirement, FPS did not have a national requirement for guard inspections, and each region we visited had requirements that ranged from no inspection requirements to each inspector having to conduct five inspections per month.

Increased X-ray and magnetometer training requirements for inspectors and guards. FPS has increased its X-ray and magnetometer training for inspectors and guards from 8 hours to 16 hours. In July 2009, FPS also required each guard to watch a government-provided digital video disc (DVD) on bomb component detection by August 20, 2009. According to FPS, as of January 2010, approximately 78 percent, or 11,711, of the 15,000 guards had been certified as having watched the DVD.

Implementing a new system to monitor guard training and certifications. As mentioned earlier, FPS is also implementing RAMP. According to FPS, RAMP will provide it with the capability to monitor and track guard training and certifications and enhance its ability to conduct and track guard inspections. RAMP is also designed to be a central database for capturing and managing facility security information, including the risks posed to federal facilities and the countermeasures that are in place to mitigate risk. It is also expected to enable FPS to manage guard certifications and to conduct and track guard inspections electronically as opposed to manually. However, as mentioned earlier, as of March 18, 2010, FPS suspended the use of RAMP until it can resolve existing issues.

Despite FPS's recent actions, it continues to face challenges in ensuring that its $659 million guard program is effective in protecting federal facilities. While the changes FPS has made to its X-ray and magnetometer training will help to address some of the problems we found, there are some weaknesses in the guard training. For example, many of the 15,000 guards will not be fully trained until the end of 2010. In addition, one contractor told us that one of the weaknesses associated with FPS's guard training program is that it focuses primarily on prevention and detection but does not adequately address challenge and response. This contractor has developed specific scenario training and provides its guards on other contracts with an additional 12 hours of training on scenario-based examples, such as how to respond to a suicide bomber or an active-shooter situation, how to evacuate a building, and how to shelter in place. The contractor, which has multiple contracts with government agencies, does not provide this scenario-based training to its guards on FPS contracts because FPS does not require it. We also found that some guards were still not provided building-specific training, such as what actions to take during a building evacuation or a building emergency. According to guards we spoke to in one region, guards receive very little training on building emergency procedures during basic training or the refresher training. These guards also said that the only time they receive building emergency training is once they are on post. Consequently, some guards do not know how to operate basic building equipment, such as the locks or the building ventilation system, which is important in a building evacuation or building emergency. FPS's decision to increase guard inspections at federal facilities in metropolitan areas is a step in the right direction. However, it does not address issues with guard inspections at federal facilities outside metropolitan areas, which are equally vulnerable.
Thus, without routine inspections of guards at these facilities, FPS has no assurance that guards are complying with their post orders. We believe that FPS continues to struggle with managing its contract guard program in part because, although it has used guards to supplement the agency's workforce since the 1995 bombing of the Alfred P. Murrah Federal Building, it has not undertaken a comprehensive review of its use of guards to protect federal facilities to determine whether other options and approaches would be more cost-beneficial. FPS also has not acted diligently in ensuring that its guard contractors meet the terms of the contract and in taking enforcement action when noncompliance occurs. We also believe that completing the required contract performance evaluations for its contractors and maintaining contract files will put FPS in a better position to determine whether it should continue to exercise contract options with some contractors. Moreover, maintaining accurate and reliable data on whether the 15,000 guards deployed at federal facilities have met the training and certification requirements is important for a number of reasons. First, without accurate and reliable data, FPS cannot consistently ensure compliance with contract requirements and lacks information critical for effective oversight of its guard program. Second, given that other federal agencies rely on many of the same contractors to provide security services, completing accurate evaluations of a contractor's past performance is critical to future contract awards. Thus, in our report we recommend that the Secretary of Homeland Security direct the Under Secretary of NPPD and the Director of FPS to take the following eight actions:

identify other approaches and options that would be most beneficial and financially feasible for protecting federal buildings;
rigorously and consistently monitor guard contractors' and guards' performance and step up enforcement against contractors that are not complying with the terms of the contract;
complete all contract performance evaluations in accordance with FPS and Federal Acquisition Regulation requirements;
issue a standardized record-keeping format to ensure that contract files have required documentation;
develop a mechanism to routinely monitor guards at federal facilities;
provide building-specific and scenario-based training and guidance to its guards;
develop and implement a management tool for ensuring that reliable, comprehensive data on the contract guard program are available on a real-time basis; and
verify the accuracy of all guard certification and training data before entering them into RAMP, and periodically test the accuracy and reliability of RAMP data to ensure that FPS management has the information needed to effectively oversee its guard program.

DHS concurred with seven of our eight recommendations. Regarding our recommendation to issue a standardized record-keeping format to ensure that contract files have required documentation, DHS concurred that contract files must have required documentation but did not concur that a new record-keeping format should be issued. DHS commented that written procedures already exist and are required for use by all DHS's Office of Procurement Operations staff and the components it serves, including NPPD.
We believe that the policies referenced by DHS are a step in the right direction in ensuring that contract files have required documentation; however, although these policies exist, we found a lack of standardization and consistency among the contract files we reviewed from the three Consolidated Contract Groups. Overall, we are also concerned about some of the steps FPS plans to take to address our recommendations. For example, FPS commented that to provide routine oversight of guards in remote regions it will use an employee of a tenant agency (referred to as an Agency Technical Representative) who has authority to act as a representative of a COTR for day-to-day monitoring of contract guards. However, several FPS regional officials told us that the Agency Technical Representatives were not fully trained and did not have an understanding of the guards' roles and responsibilities. These officials also said that the program may not be appropriate for all federal facilities. We believe that if FPS plans to use Agency Technical Representatives to oversee guards, it is important that the agency ensure that the representatives are knowledgeable about the guards' responsibilities and are trained on how and when to conduct guard inspections as well as how to evacuate facilities during an emergency. Furthermore, while we support FPS's overall plans to better manage its contract guard program, we believe it is also important for FPS to have appropriate performance metrics to evaluate whether its planned actions are fully implemented and are effective in addressing the challenges it faces managing its contract guard program. Mr. Chairman, this concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark L. Goldstein, (202) 512-2834 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Tida Barakat; and Jonathan Carver.
To accomplish its mission of protecting about 9,000 federal facilities, the Federal Protective Service (FPS) currently has a budget of about $1 billion, about 1,225 full-time employees, and about 15,000 contract security guards. FPS obligated $659 million for guard services in fiscal year 2009. This testimony is based on our report issued on April 13, 2010, and discusses (1) the challenges FPS continues to face in managing its guard contractors, (2) the challenges it faces in overseeing guards deployed at federal facilities, and (3) the actions FPS has taken to address these challenges. To address these objectives, GAO conducted site visits at 6 of FPS's 11 regions; interviewed FPS officials, guards, and contractors; and analyzed FPS's contract files. GAO also reviewed new contract guard program guidance issued since our July 2009 report and observed guard inspections and penetration testing done by FPS. FPS faces a number of challenges in managing its guard contractors that hamper its ability to protect federal facilities. FPS requires contractors to provide guards who have met training and certification requirements. FPS's guard contract also states that a contractor who does not comply with the contract is subject to enforcement action. GAO reviewed the official contract files for the seven contractors who, as GAO testified in July 2009, had guards performing on contracts with expired certification and training requirements to determine what action, if any, FPS had taken against these contractors for contract noncompliance. These contractors had been awarded several multiyear contracts totaling $406 million to provide guards at federal facilities in 13 states and Washington, D.C. FPS did not take any enforcement actions against these seven contractors for noncompliance. In fact, FPS exercised the option to extend their contracts. FPS also did not comply with its requirement that a performance evaluation of each contractor be completed annually and that these evaluations and other performance-related data be included in the contract file. FPS plans to provide additional training and hold staff responsible for completing these evaluations more accountable. FPS also faces challenges in ensuring that many of the 15,000 guards have the required training and certification to be deployed at a federal facility. In July 2009, GAO reported that since 2004, FPS had not provided X-ray and magnetometer training to about 1,500 guards in 1 region. As of January 2010, these guards had not received this training and continued to work at federal facilities in this region. X-ray and magnetometer training is important because guards control access points at federal facilities. FPS currently does not have a fully reliable system for monitoring and verifying whether its 15,000 guards have the certifications and training to stand post at federal facilities. FPS developed a new system—the Risk Assessment and Management Program—to help monitor and track guard certifications and training. However, FPS is experiencing difficulties with this system and has suspended its use. In addition, once guards are deployed to a federal facility, they do not always comply with assigned responsibilities (post orders). Since July 2009, FPS has conducted 53 penetration tests in the 6 regions we visited, and in over half of these tests some guards did not identify prohibited items, such as guns and knives.
In response to GAO's July 2009 testimony, FPS has taken a number of actions that, once fully implemented, could help address challenges it faces in managing its contract guard program. For example, FPS has increased the number of guard inspections at federal facilities in some metropolitan areas. FPS also revised its X-ray and magnetometer training; however, not all guards will be fully trained until the end of 2010, although they are already deployed at federal facilities. Despite FPS's recent actions, it continues to face challenges in ensuring that its $659 million guard program is effective in protecting federal facilities. Thus, among other things, FPS needs to reassess how it protects federal facilities and rigorously enforce the terms of the contracts.
Congress created SEC in 1934 to administer and enforce the federal securities laws to protect investors and maintain the integrity of the securities markets. SEC's mission is to (1) promote full and fair disclosure; (2) prevent and suppress fraud; (3) supervise and regulate the securities markets; and (4) regulate and oversee investment companies, investment advisers, and public utility holding companies. SEC works to fulfill this mission through various divisions and offices, among them the Office of the Executive Director, which formulates SEC's budget and authorization strategies. As a federal agency, SEC is subject to congressional oversight. Congress oversees federal agencies primarily through two distinct but complementary processes—authorizations and appropriations, which are implemented through authorizing and appropriating committees in the U.S. Senate and House of Representatives. The authorizing committees are responsible for creating a program, mandating the terms and conditions under which it operates, and establishing the basis for congressional oversight and control. SEC's authorizing committees are the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services. The appropriations committees and subcommittees are charged with assessing the need for, amount of, and period of availability of appropriations for agencies and programs under their jurisdiction. SEC's annual appropriations are under the jurisdiction of the Subcommittee on Commerce, Justice, State, and the Judiciary, U.S. Senate Committee on Appropriations; and the Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies, House Committee on Appropriations. To fund its operations, the federal securities laws direct SEC to collect fees. SEC generally collects three types of fees:

Securities registration fees, which are required to be collected under Section 6(b) of the Securities Act of 1933 (the Securities Act), are paid when companies register with SEC new stocks and bonds for sale to investors. In 2001, SEC collected $987 million in Section 6(b) fees.

Securities transaction fees, which are required to be collected under Section 31 of the Securities Exchange Act of 1934 (the Exchange Act), are paid by national securities exchanges and national securities associations when registered securities and security futures are sold on or off exchanges through any member of such an association. In 2001, SEC collected $1.04 billion in Section 31 fees.

Fees on proxy solicitations for mergers, consolidations, acquisitions, or sales of a company's assets, which are required to be collected under Section 14(g) of the Exchange Act, are paid by the person filing proxy solicitation materials for such transactions. Fees on the purchase of securities by an issuer of its issued securities are paid by the issuer under Section 13(e) of the Exchange Act. In 2001, SEC collected $33 million in filing fees.

SEC fees are deposited in a special SEC appropriations account to be used as offsetting collections. Although the fees were enacted to fund SEC operations, figure 1 illustrates how the amount of fees collected in recent years has far exceeded SEC's appropriated budget. For example, in 2000, SEC collected $2.27 billion in fees, while the agency's 2000 budget was $368 million. Similarly, in 2001, SEC collected about $2.1 billion, while its 2001 budget was $423 million.
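The gap between what SEC collects and what it is appropriated can be illustrated with simple arithmetic using the figures cited above; the calculation below is illustrative only and uses the rounded amounts reported in this section.

```python
# Illustrative arithmetic using the fee and budget figures cited in this report
# (amounts in millions of dollars); rounding follows the text.
collections = {2000: 2270, 2001: 2100}     # total fees collected
appropriations = {2000: 368, 2001: 423}    # SEC's appropriated budget

for year in sorted(collections):
    excess = collections[year] - appropriations[year]
    print(f"{year}: collected ${collections[year]:,}M, appropriated ${appropriations[year]:,}M, "
          f"excess ${excess:,}M available to offset other CJS spending")
# The 2001 excess is roughly $1.7 billion, consistent with the figure discussed later in this report.
```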
Projected fee collections in excess of SEC’s appropriations are available to SEC’s appropriators to fund other priorities within the CJS appropriations bill. Congress first addressed the issue of excess SEC fees in 1996 through the National Securities Markets Improvement Act of 1996, which reduced registration and transaction fees. However, SEC’s fee collection grew even higher because of subsequent increases in stock prices and stock trading volume. Viewing these excess fees as an unwarranted tax on investment and capital formation, Congress enacted the Investor and Capital Markets Fee Relief Act on January 16, 2002. The Act substantially reduces the fees collected by SEC and designates all such fees as offsetting collections available to fund the operations of the agency, to the extent provided by Congress. Prior to the enactment of this Act, most of the fees collected were deposited in the U.S. Treasury general fund as revenue. The Act also reduces the basic rates for transaction fees, registration fees, stock repurchase fees, and merger and acquisition fees, and it eliminates certain other filing fees. The Act includes “target offsetting collection amounts” for both transaction and registration fees for fiscal years 2002 through 2011. SEC would be required to adjust the basic rates for those fees to make it “reasonably likely” that collections would equal the target amounts. The Act also grants SEC the authority to pay its employees’ salaries and benefits at levels commensurate with those paid by the federal banking regulators (pay parity). For 2003, SEC currently estimates the additional cost of implementing pay parity to be $76 million. SEC anticipates that the funding to accommodate this increase will be provided exclusively out of the amount of fees SEC is scheduled to collect annually under the Act and appropriated by Congress. Although SEC fee collections as estimated for 2003 in the Act total $1.33 billion, the President’s 2003 Budget request included a budget estimate of $466.9 million for SEC. This amount represents a $29 million, or 6.6 percent, increase over SEC’s 2002 budget of $437.9 million but does not include any funding for a pay parity program in 2003. To date, Congress has not enacted SEC’s 2003 appropriation. As reported in our SEC operations report, SEC is operating in an increasingly dynamic regulatory environment. Over the past decade, the securities markets have undergone tremendous growth and innovation as technological advances have increased the complexity of the markets and the range of products afforded to the public. Larger, more active, and more complex markets have produced more market participants, registrants, filings, examinations and inspections, legal interpretations, complaints, and opportunities for fraudulent activities. In our SEC operations report, SEC and industry officials agreed that SEC’s ability to fulfill its mission in such a dynamic environment has become increasingly strained as SEC’s growing workload has substantially outpaced increases in its staffing levels. Specifically, over the past decade, we found that staffing within SEC’s various oversight areas has grown between 9 and 166 percent, while workload measures in those areas have grown from 60 to 264 percent. 
Moreover, following the sudden and highly publicized collapse of Enron Corporation and other corporate failures, SEC has been under increasing pressure to ensure that it is equipped to adequately oversee the securities markets and to ensure that investors receive accurate and meaningful financial disclosure, an important part of SEC's mission to protect investors. In addition, legislative changes such as the Gramm-Leach-Bliley Act of 1999, the Commodity Futures Modernization Act of 2000, and the USA Patriot Act of 2001 placed added demands on SEC's limited resources. All of these changes have significant repercussions and pose challenges for SEC's oversight role. In light of these challenges and prompted by concerns about SEC's ability to carry out its mission, legislators introduced H.R. 3764 and S. 2673, both of which would authorize appropriations for SEC of $776 million, and H.R. 3818, which would authorize appropriations for SEC of $876 million. Both House bills would designate more than half of these amounts for the Division of Corporate Finance and the Division of Enforcement to increase enforcement in financial reporting cases and other oversight initiatives. The Senate bill designated specific amounts for pay parity, information technology, and additional staff for oversight of audit services. Federal financial regulators are largely self-supporting through fee collections, assessments, or other funding sources, but not all of these self-funding options meet the Act's definition of self-funding. The variation among federal agencies is attributable to how and when Congress makes the funds available to the agency and how much flexibility Congress gives the agency in using the fees or other funding sources it collects. At some agencies, Congress limits the amount of assessments collected or available for agency use. Such limitations are generally established by provisions in annual appropriations acts. For example, funding for SEC in 2002 was appropriated from fees collected in 2002 and prior years. In SEC's case, although all the offsetting collections by definition are dedicated to SEC, Congress limits how much fee revenue is available. For example, in 2001, SEC collected about $2.1 billion in fees; however, Congress appropriated about $423 million for SEC's operations. Therefore, almost $1.7 billion was available to the CJS subcommittees to offset spending for other agencies and programs under CJS jurisdiction. There are other regulatory agencies, such as FCA and OFHEO, which also operate at this more congressionally controlled end of the self-funding range (see fig. 2 for a description of these two agencies' missions as well as the missions of other financial services regulators). Although FCA and OFHEO fund their operations solely by assessments from their regulated entities, these agencies remain subject to the appropriations process. That is, Congress establishes annual limits through the appropriations process by approving the amount of assessments these agencies can collect. For example, in 2001, Congress appropriated $40 million to FCA, which authorized FCA to collect assessments up to this amount as offsetting collections for 2001. Moreover, Congress limits the amount of assessments that FCA can obligate for administrative expenses. For example, in 2001, FCA's obligation for administrative expenses was limited to about $38 million. Congress also establishes OFHEO's budget in a similar manner.
On the other hand, Congress has granted more self-controlled funding structures to other agencies. Some of these agencies have permanent indefinite appropriations, which means that these agencies can use whatever funds are collected without any further legislative action. Agencies at this less congressionally controlled end of the range include the federal banking agencies (that is, FRS, OCC, OTS, FDIC, and NCUA). Unlike SEC, which is generally funded by transaction fees and registration-based fees and subject to annual appropriations, these agencies are supported almost entirely through examination or assessment fees on their members, deposit insurance premiums, or interest on asset holdings, and are not included in the annual congressional appropriations process. The bank regulators' self-funding structure most closely fits the Act's definition of self-funding. According to banking agency officials, they, not Congress, control their agencies' budget growth and direct how their agencies spend their funds. Although SEC continues to be subject to annual appropriations, the Act moved SEC closer to having the same authority as the banking agencies by allowing SEC to establish the compensation and benefit levels of its employees. Moving SEC to a more self-controlled funding structure has two important implications for SEC's operations. First, SEC would have more control over its own budget and funding level, which some SEC and industry officials believe may better enable SEC to take steps to address its increasing workload and some of its human capital challenges, such as recruiting and retaining quality staff. However, others knowledgeable about SEC's operations questioned whether more budget flexibility is the best means to address SEC's recruiting and retention issues. Second, SEC would have an added responsibility in managing a more self-controlled funding structure. Self-funded agencies require sound fiscal control mechanisms to compensate for the removal of the scrutiny provided by both OMB and the appropriators, as part of the federal budget process. In addition, self-funded agencies require sound fiscal discipline to ensure revenue streams. In previous reports, we found weaknesses in SEC's existing budget and planning processes. Some SEC officials told us that a more self-controlled funding approach might better enable SEC to address its increasing workload and ongoing human capital challenges, most notably high staff turnover and numerous vacancies. As mentioned previously, we reported in our SEC operations report that both SEC and industry officials agreed that current levels of human capital and budgetary resources have limited SEC's ability to address many current and evolving market issues at a time when the collapse of Enron and other corporate failures have increased SEC's workload and generated debates on reforms, which may result in increased responsibilities for SEC. However, others knowledgeable about the industry countered that while SEC may need more resources, there are more efficient ways to effect a change in SEC's budget than conversion to a self-controlled funding basis. For example, within the existing structure, SEC could justify budget increases to its authorization and appropriation committees beyond the amount included in the President's Budget. Based on their experiences with self-funding, officials from the bank regulatory agencies we interviewed said that self-funding provided their agencies with more autonomy in formulating their budgets.
They also said that having more control enabled them to respond more quickly to program needs in changing market conditions because they could reallocate or increase funding without having to wait for legislative action. SEC officials said they believed they would realize similar benefits in the human capital area because they would have greater control over their funding and would be able to respond quickly to changes in the market. For example, the sudden collapse of Enron Corporation and other corporate failures have stimulated an intense debate on the need for broad-based reform in such areas as financial reporting and accounting standards, oversight of the accounting profession, and corporate governance. In response to these challenges and proposals for regulatory changes, SEC officials requested approval for 100 additional staff positions dedicated to reviewing corporate filings, enforcing securities laws, and providing accounting guidance. However, under the existing structure, Congress and the executive branch must approve any such increases in SEC’s staffing allocation. Although there is general agreement on the need for these increased resources, SEC’s request to increase staffing in these areas is included in a supplemental appropriations bill that was considered in April 2002 but is not yet enacted, as Congress is considering issues unrelated to SEC’s funding needs. A more self-controlled funding structure would have allowed SEC to immediately implement its plan without the need for legislative action. Another SEC official said that a more self-controlled funding structure would enable SEC to allocate resources to fund pay parity, allowing SEC to offer compensation packages similar to those offered by the bank regulators and putting SEC in a better position to attract and retain quality staff. SEC believes this could also help SEC stem turnover among its attorneys, accountants, and examiners—staff necessary to carry out SEC’s mission. Although the rate had decreased from 15 percent in 2000 to 9 percent in 2001, turnover at SEC was still higher than the turnover rate governmentwide in 2001. As we reported previously, most SEC employees who responded to our survey said that compensation was their primary reason for leaving or thinking of leaving SEC. Although SEC officials acknowledged that turnover will always be an issue, they said that pay parity should enable SEC to lengthen the average tenure of attorneys and examiners. We previously reported that in 1999 the average tenure for attorneys was 2.5 years and for examiners 1.9 years. According to SEC officials, new employees need at least 2 years on the job to gain the knowledge and experience necessary to significantly contribute to SEC’s mission. Another implication of moving SEC to a more self-controlled funding structure is that it would require SEC to establish a system of internal controls to ensure fiscal discipline. Under SEC’s current funding structure, OMB and the appropriations process provide fiscal discipline for the agency. For example, SEC’s current annual budget cycle as illustrated in figure 3 begins with the preparation of an agencywide estimate that is based on the previous budget year’s appropriation. SEC then develops a conforming budget estimate based on OMB’s budget guidance, including a specified budget amount that OMB provides to SEC. After receiving OMB’s approval, SEC’s budget request is included in the President’s Budget that is submitted to Congress. 
Under this structure, SEC’s annual budget has been based on the previous year’s appropriations rather than on what may be actually needed to fulfill its mission. While practical, as reported in our SEC operations report, we found that this type of reactive approach could diminish SEC’s effectiveness, resulting in less effective enforcement and oversight. If moved to a self-controlled funding structure, not only would SEC have to improve its budget planning process by reviewing its staffing and resource needs independent of the budget process but the fiscal restraint provided within the federal budget process would be lost. Therefore, SEC would need to create its own internal control mechanisms and accountability structure to ensure fiscal discipline and budgetary restraint. Bank regulatory officials said that to compensate for not being subject to appropriations oversight, self-funding requires discipline in both planning and budget processes. For example, one bank regulatory official said that his agency has a budget process that mirrors the federal budget planning process: the head of the agency reviews the budget estimates for each division and holds “hearings” in which each division must justify its budget estimate, similar to OMB’s budget process. Officials from NCUA, OCC, and OTS also said that their agencies routinely share their budgets with OMB as a courtesy. In addition to their own internal processes, bank regulators also said that they experience some amount of regulatory competition and scrutiny from industry groups and regulated entities. These pressures provide incentives to the regulators to keep their operations efficient. Four regulators oversee the banking industry: three charter commercial banks, and one charters thrift institutions. Moreover, commercial banks have the option of changing their national charter to a state charter, and thrifts can opt to switch from their thrift charter to one of the commercial bank charters. Unlike the bank regulators, SEC is the sole federal regulator overseeing the U.S. securities markets, and its regulated entities generally have no other regulatory options if they want to operate in the securities markets. However, this structure does afford SEC a certain amount of independence from its regulated entities, an independence that may not be afforded to other agencies facing regulatory competition. Additionally, SEC’s fee payers may be less likely to scrutinize SEC’s budget because unlike the banking industry, where the burden of paying assessment fees is limited to the regulated entities, the securities industry distributes the responsibility for paying SEC’s transaction fees among all market participants. Therefore, in the absence of strong external pressures, a rigorous internal budget process and a related set of controls would be critical for SEC if it were to operate on a self-controlled funding basis. Fiscal discipline is also important for self-controlled funding agencies because these agencies have no guarantee that they will be included in the appropriations process if they experience budget shortfalls. Instead, short of raising fees or assessments, some of these agencies, such as OCC and OTS, rely on backup sources of funding, such as reserves established from excess funds from previous years. These two agencies have established reserves to protect them during periods of revenue shortages. However, both agencies also have established internal policies that govern the appropriate use of these reserves. 
OCC and OTS officials said that their agencies now are less willing to use reserves during periods of revenue shortages. For example, the heads of these agencies have chosen to downsize and cut their expenses to maintain their budgets rather than use their reserves. Unlike the banking regulators, who have more control over the amount collected through assessments, SEC relies on transaction fees, which are less predictable and more difficult to estimate. Finally, moving SEC to a more self-controlled funding basis would also increase the need for strategic planning, which should also be linked to the budget process. Based on our review of SEC's strategic plan in our SEC operations report, we found that SEC had not engaged in a comprehensive strategic planning process. We found that SEC had not systematically utilized its strategic planning process to ensure (1) that resources are best used to accomplish its basic statutorily mandated duties, and (2) that human capital planning has identified the resources necessary to fulfill the full scope of its mission. Moreover, SEC's annual plans lacked the detailed analysis and information needed to make informed workforce decisions. We found that additional information on (1) any excess or gaps in needed competencies within the agency's various divisions and offices and (2) the relationship between budget requests for full-time equivalent staff years and SEC's ability to meet individual strategic goals could make SEC's budget process more meaningful. Introducing a meaningful strategic planning process at SEC could also make budget planning more proactive, rather than reactive, as is currently the case. SEC has begun to take steps to address these issues. In March 2002, SEC hired a consulting firm to work with an internal taskforce to perform an in-depth review of SEC's operations, effectiveness, and resource needs. However, SEC officials stated that because the 2003 budget has already been finalized under the current budget process, they were concerned that even if substantive improvements were recommended by the internal taskforce, the earliest that SEC would be able to effectively react to these changes would be the 2004 budget cycle. A shift in budgetary control from Congress and OMB to make SEC self-funded as defined in the Act poses various implications for oversight of SEC. It could reduce the amount of direct control over SEC's budget and operations, because the appropriators and OMB would no longer be involved in oversight. By shifting more control to SEC and its authorizing committees, CJS subcommittees would also lose the benefit of having SEC's fees available to offset spending for other discretionary spending purposes. However, Congress and OMB could compensate for this reduction in direct control by placing other spending limits on SEC. If Congress granted SEC permanent authority to collect fees without further congressional action and authorized it to use whatever fees are collected—permanent indefinite appropriations—and imposed no limitations, this shift to self-funding as defined by the Act would affect congressional oversight to a greater degree than other alternatives we considered. Under permanent indefinite appropriations, the appropriations committees generally would not be involved in overseeing SEC's appropriations.
However, the authorizing committees and other oversight committees, such as the Senate Committee on Governmental Affairs and House Committee on Government Reform, could continue to oversee SEC, since congressional oversight is not limited to budgetary authority and remains an important tool for evaluating program administration and performance; making sure programs conform to congressional intent; ferreting out waste, fraud, and abuse; seeing whether programs may have outlived their usefulness; compelling an explanation or justification of policy; and ensuring that programs and agencies are administered in a cost-effective and efficient manner. Shifting SEC’s budget structure to a more self-controlled model would also diminish the role of OMB, which establishes the framework by which agencies formulate their budget estimates and is responsible for ensuring that agency budget requests are consistent with specific budgetary guidelines and spending ceilings. Currently, SEC prepares its budget request based on guidance from OMB and submits this estimate to OMB for review and approval (see fig. 3). A budget hearing is subsequently held and, during this hearing, any policy changes or shifts in the SEC Chairman’s priorities are discussed. OMB then determines whether the proposals are consistent with the President’s policy goals. This part of the process is significant from an oversight perspective, because OMB can increase or decrease SEC’s budget request based on those evaluations. For example, OMB increased SEC’s budget request by $8.6 million in 1995. According to SEC officials, OMB increased SEC’s budget proposals to allow it to hire additional examiners. Most recently, OMB reduced SEC’s 2003 budget request by about $95.5 million, most of which could have been used to fund pay parity. According to OMB officials, they would prefer that SEC not implement pay parity immediately but instead come up with a mechanism to fund it over time. As table 1 illustrates, SEC has limited influence over appropriations levels. From 1992 to 2001, OMB reduced SEC’s budget in all but 3 years. Likewise, the House of Representatives has voted to decrease SEC’s funding as presented in the President’s Budget every year. Conversely, the Senate has voted to restore most of the President’s Budget each year. Generally, the result has been appropriations lower than SEC’s budget request. In addition to annual appropriations, SEC has received supplemental appropriations or additional funding from other sources. For example, since 1994 SEC has received a supplemental appropriation or used its unobligated balances from prior years to increase its total funding level above its appropriation. If Congress wanted to give SEC greater control over its budget but still maintain some degree of control over SEC’s funding level, it could place a variety of limitations on SEC’s offsetting collections. These limitations include designating fees collected for SEC’s use, but establishing limits on their use through annual appropriations; specifying the amount of fees to be collected and available for use in appropriations. If SEC were to collect more than that amount, Congress could specify that such amounts be designated to SEC, but not be made available without (further) congressional action; controlling the size of SEC or a particular program within SEC by limiting its obligations for specific purposes or to specific amounts; and limiting the purposes for which fees can be used. 
For example, Congress limits the amount of FCA’s assessments that can be obligated for administrative expenses. OMB also has various ways of enforcing accountability that can constrain a program’s operations. For example, through the apportionment process OMB can control the rate of obligations by controlling the rate at which budget authority is made available during the fiscal year. Finally, a department or agency may independently place administrative limits on funding, such as restricting the amount that can be used for travel or not allowing funds to be shifted between items of expense. For example, an agency might prohibit a program manager from purchasing a computer using funds allocated, but no longer needed, for salaries and benefits. Another implication of self-controlled funding is that offsetting collections would no longer be available to offset funding for other discretionary spending purposes. As discussed earlier in this report, fees in excess of SEC’s budget are used by the appropriators to offset funding for other priorities in the CJS appropriations bill. Self-controlled funding would allow SEC to dedicate the fees that it collects to fund its operations without further legislative action. Therefore, SEC’s fees would not be available to offset spending for other federal programs. However, regardless of whether the SEC funding structure changes, CJS will have less funding available for discretionary spending because SEC’s fees will decrease as mandated in the Act. Whether to change SEC’s self-funding status, and to what degree, is a policy decision that resides with Congress. In deciding whether to move SEC to a more self-controlled funding structure, Congress will have to weigh the increase in flexibility afforded SEC against the loss in oversight provided by the appropriators and OMB. The increased funding flexibility would likely allow SEC to more readily fund certain budget priorities, such as pay parity, and to more quickly respond to the ever-changing securities markets. On the other hand, Congress and OMB would lose the ability to directly affect the budget and direction of the agency. In return for this added flexibility and control, SEC would have to develop its own system of fiscal controls and an accountability structure to address the loss of rigor and discipline provided by the federal budget and appropriations process. The Chairman, SEC, provided written comments on a draft of this report that are reprinted in appendix II. SEC agreed that the report correctly identified the principal consequences of moving SEC to a self-funded structure. However, SEC raised several concerns with our observations about the issues that SEC would have to address were it to be given self-funding authority. Specifically, SEC commented on our observations in the report that SEC would need to (1) adequately manage its annual fee collections if it were to be moved outside of the traditional budget process and (2) improve its budget planning process if it were given self-funding authority. SEC also stated that our discussion of the fiscal discipline that would be required if SEC were given self-funding authority would benefit from an analysis of SEC’s experience with unobligated balances derived primarily from fees collected in excess of amounts used to offset its appropriation.
On the first issue, regarding the need for SEC to adequately manage fee collections, SEC stated that the report could benefit from a more robust discussion of SEC’s responsibilities under the recently enacted Investor and Capital Markets Fee Relief Act. Among other things, this Act gives SEC the responsibility for adjusting fee rates on an annual and semiannual basis, if necessary, to meet statutory “target collection amounts.” SEC stated that developing an adjustment mechanism to perform this function has provided it with useful experience that would be beneficial if it were to move to a self-funding structure. In our report we discussed the importance of fiscal discipline for self-controlled funding agencies, because these agencies are not guaranteed to be included in the appropriations process if they experience budget shortfalls. The report also recognized that the Act, enacted in January 2002, changed how SEC’s fees are collected and statutorily established target offsetting collection amounts. We did not question SEC’s ability to adequately manage its fee collections; rather, we observed that SEC, as required by statute, relies on transaction-based fees, which we continue to believe generate revenues that are less predictable and more difficult to estimate than the assessments used by bank regulators to fund their operations. The second issue SEC raised was our observation that SEC’s current budget planning process would have to be improved if it were converted to a self-funded basis, and it noted that “SEC’s ability to be proactive with respect to budget planning is constrained by the requirements of OMB Circular A-11, and will continue to be limited…in the absence of self-funding authority.” As stated in the report, SEC’s annual budget is based on the past year’s appropriations rather than on what is actually needed to fulfill its mission. Although this approach may be practical in the current context, we continue to believe that it would be useful for SEC to determine its staffing and resource needs to fulfill its mission regardless of its funding status. Nevertheless, we are encouraged by SEC’s expressed commitment to improving its budget and strategic planning processes and the preliminary steps that are currently under way. Finally, SEC expressed concern about the report’s discussion of SEC’s need for fiscal discipline, and stated that the report “would benefit from an analysis of the SEC’s experience with unobligated balances,” which according to SEC are “derived primarily from fees collected in excess of amounts used to offset appropriation.” As illustrated in table 1 of the report, SEC has used these balances in several years during the period covered. However, we are not persuaded that additional analysis of SEC’s use of these balances would be beneficial to the report, because SEC does not have total control over the use of these unobligated funds. That is, in most cases the fiscal restraint provided by the current budgetary process is still a factor, because SEC is still subject to OMB and congressional review of its reprogramming proposals. In the absence of external fiscal discipline, we continue to believe that self-funded agencies have to establish systems to instill the fiscal restraint that would have been provided by the budget and appropriations processes.
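To make the fee discussion above concrete, the sketch below shows one simple way a transaction-fee rate could be adjusted toward a statutory target collection amount: dividing the target by projected transaction dollar volume. This is a minimal illustration only; it is not the formula in the Act or SEC's actual adjustment mechanism, and the function name and dollar figures are hypothetical.

```python
# Hypothetical sketch of a transaction-fee rate adjustment. The Act's actual
# formula and SEC's mechanism are not reproduced here; names and figures are
# illustrative only.

def adjusted_fee_rate(target_collections, projected_dollar_volume):
    """Return the fee rate (per dollar traded) that would yield the statutory
    target collection amount if the volume projection holds."""
    if projected_dollar_volume <= 0:
        raise ValueError("projected dollar volume must be positive")
    return target_collections / projected_dollar_volume


if __name__ == "__main__":
    # Illustrative numbers: a $1.0 billion target against $40 trillion in
    # projected transaction dollar volume.
    target = 1_000_000_000
    projected_volume = 40_000_000_000_000
    rate = adjusted_fee_rate(target, projected_volume)
    print(f"Implied rate: ${rate * 1_000_000:.2f} per $1 million traded")
```

Because actual collections equal the rate applied to realized trading volume, errors in the volume projection flow directly into revenues, which is consistent with the observation that transaction-based fees are less predictable than the assessments bank regulators levy.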
We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Appropriations and its Subcommittee on Commerce, Justice, State, and the Judiciary; the Chairman and Ranking Minority Member, House Committee on Appropriations, and its Subcommittee on Commerce, Justice, State, the Judiciary, and Related Agencies. We will also send copies to the Chairman of SEC and will make copies available to others upon request. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me or Orice M. Williams at (202) 512-8678. To identify the existing self-funding structures used by Congress and the extent of control afforded to the appropriators under each structure, we interviewed officials from the Securities and Exchange Commission (SEC), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the National Credit Union Administration (NCUA) to obtain information on their budget structures and processes. Previously, we had discussed these issues with the Federal Deposit Insurance Corporation (FDIC) and the Federal Reserve System (FRS). In addition, we reviewed previous GAO work on the structure of other self-funded agencies, such as the Farm Credit Administration (FCA) and the Office of Federal Housing Enterprise Oversight (OFHEO). We then compared SEC’s self-funding structure to that of other federal financial regulators and analyzed the degree of control afforded to the appropriators under each structure. To determine the implications for SEC operations and congressional and executive branch oversight, we interviewed SEC officials regarding the impact of self-funding on SEC operations. We met with the SEC Chairman to obtain his views on self-funding. We interviewed financial regulators about the impact of self-funding on their respective agencies, and about the challenges and benefits associated with self-funding. We also interviewed representatives from the Senate and House CJS appropriations subcommittees, and officials from the Office of Personnel Management (OPM) and the Office of Management and Budget (OMB) to obtain their views on shifting budgetary control to SEC. Finally, we reviewed relevant GAO reports on SEC operations to identify existing issues. We did our work in Washington, D.C., between February and July 2002, in accordance with generally accepted government auditing standards. In addition to the persons named above, M’Baye Diagne, Edda Emmanuelli-Perez, Denise Fantone, Edwin Lane, Barbara Roesmann, and Karen Tremba made key contributions to this report.
GAO studied the implications of converting the Securities and Exchange Commission (SEC) to a self-funded entity. Congress has created a range of self-funding structures that rely on sources of funding other than appropriations from the Department of the Treasury's general fund. The variations among these structures depend on how and when Congress makes the fees available to an agency and how much flexibility Congress gives an agency in using its collected fees without further legislative action. Moving SEC to a more self-controlled funding structure has implications for two important areas. First, SEC would have more control over its own budget and funding level, which some SEC and industry officials believe may better enable SEC to address its increasing workload and some of its human capital challenges, such as its ability to recruit and retain quality staff. The second result would be a loss of the checks and balances currently provided by the federal budget and appropriations processes. Moving SEC to a self-controlled funding structure would diminish congressional and executive branch oversight, although the congressional authorizing committees would maintain, and could choose to increase, their oversight of SEC. However, if Congress wanted to give SEC greater budget flexibility but still maintain some degree of control over SEC's funding level, it could place limitations on SEC's offsetting collections.
Several different federal agencies are involved with the implementation of EEOICPA’s Subtitle B program, including Labor, CDC’s NIOSH, and Energy. Labor’s Office of Workers’ Compensation Programs is responsible for adjudicating and administering claims filed by workers, former workers, or certain eligible survivors under the act. NIOSH, as part of the Centers for Disease Control and Prevention within the Department of Health and Human Services, is responsible for performing several technical and policy-making roles in support of Labor’s program, including establishing by regulation methods for arriving at reasonable estimates of radiation doses received by an individual at a covered facility; establishing by regulation guidelines to be used by Labor to determine whether an individual sustained a cancer in the performance of duty for purposes of the compensation program if, and only if, the cancer was “at least as likely as not” related to the radiation dose received by the employee; establishing procedures for considering petitions to be added to the special exposure cohort; and providing the Advisory Board on Radiation and Worker Health with administrative and other necessary support services. EEOICPA specified that the President appoint an Advisory Board to advise the Secretary, HHS, on its activities under the act. The Advisory Board, which is composed of scientists, physicians, and workers, advises the Secretary, HHS, on the development of methods used to perform dose reconstructions and guidelines to be used to assess the likelihood that an employee’s cancer is “at least as likely as not” related to work-related radiation exposure, the scientific validity and quality of dose reconstruction efforts performed, and the addition of employees to the special exposure cohort. Energy is responsible for providing Labor and NIOSH information to assist with processing claims. This information includes such things as employment verification, information specifying the estimated radiation dose of that employee during each employment period claimed, and facilitywide monitoring data. Several requirements must be met for a claimant to be eligible for compensation under Subtitle B. For a worker (or eligible survivor) to qualify for benefits, the worker must have worked at a covered Energy facility or at a beryllium vendor facility, or for an atomic weapons employer during a covered time period, and developed one of the specified illnesses associated with exposure to radiation, beryllium, or silica. Covered medical conditions include all cancers (except chronic lymphocytic leukemia), beryllium disease, and chronic silicosis. When a claim is filed, it is assigned to one of Labor’s four district offices— Jacksonville, Florida; Cleveland, Ohio; Denver, Colorado; or Seattle, Washington—based on the geographical location of the covered worker’s last employment. Upon receipt of a claim, Labor determines whether the Subtitle B claimant meets eligibility requirements for one of three claim types: RECA Section 5 supplement claims; beryllium, silicosis, and special exposure cohort cancer claims; and cancer claims not covered by special exposure cohort provisions. For the purposes of our report, we have grouped these three types of claims into two categories, based on whether or not the claims are referred to NIOSH for dose reconstruction during processing. 
As figure 1 shows, claims that are not referred to NIOSH for dose reconstruction include RECA Section 5 supplement claims and beryllium, silicosis, and special exposure cohort cancer claims. Claims that are referred to NIOSH for dose reconstruction include cancer claims not covered by special exposure cohort provisions. Depending on the type of claim, Labor must complete certain claims- processing tasks before a decision can be made as to whether the claimant should receive compensation. Claims for the $50,000 RECA Section 5 supplement are the least complex. For these, Labor verifies with the Department of Justice that an award determination has previously been made and documents the identity of the claimant. For claims involving beryllium disease, silicosis, or a specified cancer for workers at a special exposure cohort facility, the employment and illness are verified. After the verification is completed for a claim, Labor develops a recommended decision that is issued to the claimant. The claimant may agree with the recommended decision or may object and request either a review of the written record or an oral hearing. In either case, the Final Adjudication Branch (a separate entity within Labor’s Office of Workers’ Compensation Programs) will review the entire record, including the recommended decision and any evidence or testimony submitted by the claimant and will issue a final decision. A claimant can appeal the decision in the U.S. District Courts or have the case reopened if new evidence is provided to Labor. Other claims are referred to NIOSH for dose reconstruction. Such claims include those involving a claimed cancer not covered by the special exposure cohort provisions. Before a determination of compensability can be made, a dose reconstruction must be conducted for the probability of causation to be established. In these instances, once Labor determines a worker was a covered employee and that he or she had a diagnosis of cancer, the case is referred to NIOSH. Using scientific and other collected information, NIOSH performs a dose reconstruction and provides the results to Labor. Labor uses these results to assess whether the employee’s cancer was “at least as likely as not” related to the radiation dose received by the employee in order to determine compensability. The purpose of a dose reconstruction is to characterize the extent to which workers were exposed to radiation present in the workplace and to assist Labor in determining the probability that a person’s cancer was “at least as likely as not” caused by radiation. Dose reconstructions rely on information that was periodically collected to monitor radiation levels by Energy or other covered facilities and on information collected during interviews with the claimant. For example, when such information is available, NIOSH officials gather information that was collected to monitor a worker’s radiation exposure, such as readings from a worker’s monitoring badges, urinalysis results, and radon monitoring results. They also obtain information from workplacewide monitoring readings, such as general air-sampling results, radon monitoring results, and work-required medical screening x-rays. 
NIOSH officials also conduct interviews with claimants to obtain information on their employment history, how they were monitored for radiation exposure, whether they were aware of any particular incidents during which they may have been exposed to radiation, and whether medical screening had indicated they may have been exposed to radiation. In cases where NIOSH officials cannot fully characterize the likely level of radiation exposure, they estimate the level of exposure using reasonable scientific assumptions that give the claimant all the benefit of the doubt, according to NIOSH officials. Compensation is limited to $150,000 per worker for all claims that are not related to RECA Section 5 supplements. When multiple survivors of the same worker file claims, the compensation amount is divided among eligible survivors. Moreover, while multiple claims associated with a single worker may be filed with Labor, only one dose reconstruction is needed in such instances. See appendix II for detailed information about the claim- processing steps used by Labor and NIOSH. In the first 2½ years of the program—July 31, 2001, through January 31, 2004—Labor had fully processed 83 percent of claims not referred to NIOSH for dose reconstruction. During the first year of the program, Labor was not able to meet one of its primary Government Performance and Results Act of 1993 (GPRA) timeliness goals, but since then GPRA goals have been met and Labor has set higher goals for the future. Labor also established interim goals for processing claims. In addition, Labor has instituted various procedures to promote consistency, including conducting accountability reviews and updating its procedures manual. As of January 31, 2004, Labor had fully processed 83 percent of the nearly 30,000 claims for benefits under Subtitle B that had not been referred to NIOSH for dose reconstruction. As shown in figure 2, an additional 16 percent of claims were in processing, and less than 1 percent had not yet begun processing. Of the claims that were fully processed, 94 percent had final decisions, and the remainder had been closed without a final determination for administrative reasons. Forty-two percent of claims with final decisions were approved, resulting in more than $625 million in lump-sum compensation payments. The remaining 58 percent were denied, in most instances because they did not meet medical or employment eligibility criteria. On average, it took about 7 months to fully process claims not needing dose reconstruction. As of January 31, 2004, the majority of approved claims that were not referred to NIOSH for dose reconstructions reported cancer as a claimed illness, and Labor had reimbursed claimants whose claims were approved nearly $22 million in medical and travel-related expenses. About 55 percent of approved claims not referred for dose reconstruction claimed cancer, 12 percent reported chronic beryllium disease, 10 percent reported beryllium sensitivity, and 4 percent reported chronic silicosis. Approved claimants with ongoing medical and travel-related expenses related to the occupational illness for which they were compensated under Subtitle B are entitled to reimbursement for these expenses. As shown in figure 3, more than half of the nearly $22 million paid was reimbursement for claimants’ hospital expenses. Labor has generally met the two broad GPRA goals it established for timeliness of processing Subtitle B claims, as shown in table 1. 
These goals were (1) to complete the initial processing of claims within specified time periods, depending upon the type of claim, 75 percent of the time, and (2) to complete the final decision processing of claims within specified time periods 75 percent of the time. Labor did not establish different GPRA goals for claims not referred to NIOSH for dose reconstruction versus those needing dose reconstruction; rather, the GPRA goals are overall goals that apply to Labor’s processing of all Subtitle B claims. The initial processing time frames and the final decision processing time frames encompass all of the activities Labor must complete to fully process a claim. In its fiscal year 2002 annual report, Labor stated that it set these GPRA goals to provide a clear indication to claimants that their claims would be processed efficiently. The report further stated that the agency wanted to send a strong message to the new program’s staff that they should share this strong commitment in processing claims. In its 2003 strategic plan, Labor indicated that it planned to set higher processing goals through 2008 by increasing the goals by 2 percentage points each year. Labor officials cited several factors that contributed to not meeting the GPRA goal for initial processing in fiscal year 2002. For example, in the program’s first year, the district offices received more than 34,000 claims and actually had a backlog of claims to process even before they began operating the program on July 31, 2001. In addition, several start-up problems, most notably unanticipated delays in obtaining the employment information from Energy necessary to proceed with initial claims processing, also prevented Labor from achieving this goal during the first year, according to Labor officials. Labor officials also stated that they have addressed many of these initial problems and Energy has greatly improved its responsiveness rate. Labor officials report that Energy is typically responding to a request within 30 days, which exceeds Labor’s goal of obtaining a response from Energy within 60 days. Labor officials have also used other sources, such as labor unions, to help provide necessary employment verification. To assist Labor officials in knowing how well claims are being processed, and ultimately meeting its GPRA goals, Labor has also established a number of interim processing goals. These interim processing goals specify time frames for completing activities such as initiating the employment and illness verification process and issuing the lump-sum payments. Initially, the district offices had difficulty meeting some of these interim goals. However, over time they have been better able to meet these goals. For example, in fiscal year 2002, Labor set an interim goal to initiate the employment and illness verification process within 25 days 90 percent of the time. While the district offices achieved only a 76 percent rate in fiscal year 2002, they improved their rate to over 98 percent in fiscal year 2003. Similarly, in fiscal year 2002, an interim goal was set to issue a lump- sum payment to a claimant within 15 days of approving a claim 90 percent of the time. District offices achieved a 77 percent rate in fiscal year 2002 but improved their performance to achieve a 93 percent rate in fiscal year 2003. Labor has taken several steps to help ensure that Subtitle B claims are processed consistently. For example, Labor requires that claim decisions undergo several levels of review. 
After a claims examiner develops a recommended decision, a senior claims examiner reviews that recommended decision, and a claims manager, who reviews a sample of such decisions, might review it as well. Labor’s Final Adjudication Branch then reviews the recommended decision before making a final decision and awarding compensation, if appropriate. If during any of these reviews the reviewer determines that there was not enough information to make a decision, the case is sent back to the claims examiner for further development. To further promote consistency, Labor performs accountability reviews each year on the EEOICPA program as it does with its other similar compensation programs. In completing the reviews, Labor samples claims in each of its four district offices as well as its Final Adjudication Branch offices. The purpose of the reviews is to assess the quality of work being performed in each office and to guide managers in developing training and implementing any needed corrective actions. The reviews focus on such tasks as processing claims in a timely manner, making payments appropriately, assigning staff to appropriate roles, and coding claims appropriately in the case management system. The accountability reviews have proven very useful in identifying training needs, according to Labor officials. For example, after an accountability review showed that actions had been taken in some claims but were not reflected through status codes in the case management system, some district offices held training courses to help their claims examining staff better understand how to use codes properly. In addition to providing training, the district offices are required to correct any problems identified during the reviews. Labor officials told us they expect to continue to conduct accountability reviews each year. Labor has also taken steps to improve staff access to updates in claims- processing procedures. Some district offices raised concerns that the procedures manual, originally issued in January 2002, did not always reflect Labor’s most recent guidance and needed to be revised. For example, a supervisor in one of the district offices said that the bulletins announcing changes to the system are not available from a central source and that he has struggled at times to determine the proper procedure. According to Labor officials, because the program is relatively new and the law was vague in some areas, Labor has issued many different policies to define how staff should handle different situations. In addition, guidance was not always centrally located because, in issuing policy clarifications, Labor did not consistently use one format; rather, it issued policies in bulletins, e-mails, and documentation of telephone calls. To address the district offices’ concerns, Labor created a task force composed of 10 team members, including staff from the four district offices and headquarters. The task force is working to develop a comprehensive procedures manual that would include all the bulletins, teleconference calls, and other communications containing policy changes that have been issued since the beginning of the program. Officials said that they are in the final stages of completing the manual. In the first 2½ years of the program—July 31, 2001, through January 31, 2004—Labor and NIOSH fully processed about 9 percent of the claims referred to NIOSH for dose reconstruction, leaving a large backlog of these claims. 
NIOSH officials report that the backlog resulted because time was needed to develop the necessary regulations and get staff and procedures in place for performing dose reconstructions. NIOSH now has its staff and procedures in place and has an extensive effort under way to complete site profiles that expedite the dose reconstruction process. However, NIOSH’s time frame for completing the remaining profiles is uncertain, and as a result, some claims associated with facilities that do not have site profiles may take a considerable period of time to be fully processed. To ensure the consistency of claim decisions, NIOSH’s Advisory Board is overseeing an effort to evaluate dose reconstruction decisions and site profiles. Finally, with the recent issuance of special exposure cohort regulations, the backlog of claims needing dose reconstructions may be reduced if additions are made to the special exposure cohort, thereby eliminating the need for performing dose reconstructions on these claims. As of January 31, 2004, Labor, using dose reconstructions provided by NIOSH, had fully processed relatively few of the claims referred to NIOSH for dose reconstruction. Of the more than 21,000 claims requiring dose reconstruction, 9 percent were fully processed, 91 percent were in processing, and less than 1 percent had not yet begun processing, as shown in figure 4. Of the 9 percent that had been fully processed, 64 percent had final decisions, while the remaining claims were closed for administrative reasons. Our analysis showed that dose reconstructions had been started for about one-third of the claims that were in processing. The remaining claims were either waiting or undergoing development prior to the initiation of the dose reconstruction. In some cases where a site profile has not yet been developed, these claims are essentially on hold until the site profile is developed. Fifty-one percent of claims with final decisions as of January 31, 2004, were approved, resulting in $65 million in lump-sum compensation payments. Forty-nine percent were denied because the results of the dose reconstruction were used by Labor to determine that the claimed illness was not “at least as likely as not” to have been caused by work-related radiation exposure. However, approval rates for cases with final decisions have subsequently decreased, and as of July 2004, Labor officials reported that the approval rate for cases that required dose reconstruction was about 30 percent. Claims referred to NIOSH for dose reconstruction have taken longer to fully process than those that do not require dose reconstruction, and some claims in processing at NIOSH may face a long wait for dose reconstruction before returning to Labor for decisions. Of the Subtitle B claims that were fully processed, as of January 31, 2004, those that required dose reconstruction took an average of about 17 months to fully process, compared with about 7 months for claims that did not require dose reconstruction. However, the claims requiring dose reconstruction that had not yet been fully processed had already been pending for an average of 19 months. Approximately 15 of these months, on average, had been spent in processing at NIOSH and 4 months had been spent in processing at Labor. All approved claims that had required dose reconstruction reported cancer, and Labor reimbursed claimants for more than $3 million in medical and travel-related expenses as of January 31, 2004. 
Almost all of the reimbursements were for hospital and physician expenses, as shown in figure 5. Unlike Labor, which was able to immediately begin processing claims at the start of the program on July 31, 2001, NIOSH needed time to develop the necessary regulations and to get staff and procedures in place to perform dose reconstructions. Two necessary regulations were finalized in May 2002. In a May 2004 report to Congress, NIOSH reported that many of the key program pieces, such as recruiting and training staff, were not completed until 2003, contributing to the delays in its ability to complete dose reconstructions. NIOSH also highlighted the difficulties it has encountered in collecting information from Labor, Energy and other employers, and claimants. For instance, NIOSH reported that information such as employment history and cancer diagnosis provided by Labor is, at times, inaccurate or incomplete. NIOSH also reported that obtaining information from Energy or other employers has been difficult because individual exposure records cannot always be located. Finally, while the intent of conducting an interview with the claimant is to obtain useful information, claimants do not always provide it; NIOSH officials report, however, that this will not hinder a dose reconstruction. NIOSH has been working to improve its ability to develop dose reconstructions and address its backlog of claims needing dose reconstruction. In March 2004, the Director of NIOSH testified that NIOSH has steadily increased its capacity to complete dose reconstructions and that much of the program’s development is complete. NIOSH officials stated that they continue to work with Labor staff to establish a better understanding of what information, such as ethnicity and smoking history, is needed by NIOSH to perform a dose reconstruction, and officials stated that Labor is now typically providing this information. In addition, NIOSH has worked with Energy facilities to provide requested information in a more timely fashion. Improvements have been made in this area, and officials report that Energy generally provides the information within NIOSH’s time frame of 60 days. While NIOSH officials are working with claimants to better educate them about the information NIOSH wants to collect during the interview, NIOSH officials said that it was important to realize that these interviews are voluntary and are not the sole source of information. Information provided during the interviews is helpful, but a dose reconstruction is not dependent upon an interview being conducted, according to NIOSH officials. While NIOSH reports that it has improved its ability to complete dose reconstructions, it has not established any performance goal for the overall timeliness of processing the claims referred to NIOSH for dose reconstruction. Specifically, no GPRA goals were established in fiscal year 2002 or 2003 for NIOSH’s processing of Subtitle B cases, but a GPRA goal, covering part of the dose reconstruction process, was established for fiscal year 2004. Despite not having GPRA goals earlier in the program, NIOSH did establish and track some interim processing goals. NIOSH did not want to establish any overall timeliness goal for completing dose reconstructions, but rather wanted staff to complete them in as scientifically sound and efficient a manner as possible. NIOSH’s GPRA goal for fiscal year 2004 is to have draft dose reconstructions sent to 80 percent of all claimants within 60 calendar days of the claim being assigned to staff to perform a dose reconstruction.
As of July 2004, NIOSH officials reported that currently, an average of 70 days was required to conduct a dose reconstruction after a case was assigned to a dose reconstructionist. While NIOSH has developed innovative solutions to process claims from more than 70 different sites regardless of whether a site profile exists, the majority of these claims typically involve facilities that do have a site profile either completed or partially completed. However, since claims associated with facilities that do not have site profiles are typically not assigned to staff for dose reconstruction, it is possible that NIOSH could meet the GPRA goal and that some claimants could still wait a considerable period of time to have their cases fully processed. NIOSH has accelerated the rate at which it is completing dose reconstruction. For example, it took NIOSH a little more than 2 years from when it received its first referral from Labor to complete the first 1,000 dose reconstructions. In contrast, NIOSH completed the second 1,000 dose reconstructions in less than 4 months and the third 1,000 dose reconstructions in 11 weeks. NIOSH established a target of completing 8,000 dose reconstructions in fiscal year 2004. To assist in meeting this goal, NIOSH is aiming to complete 200 dose reconstructions per week. As of June 2004, NIOSH was averaging about 150 dose reconstructions a week and had completed about 2,100 dose reconstructions in the first 9 months of fiscal year 2004. To facilitate the dose reconstruction process, NIOSH is developing site profiles that compile information such as hazardous materials present at the site, facilitywide monitoring information, and information on workers at the site who may have been exposed to radiation. NIOSH officials believe that these site profiles will enhance the efficiency of performing dose reconstructions by eliminating the need to duplicate efforts in gathering information. The site profiles for larger sites consist of six documents, which are called Technical Basis Documents: an introduction, a site description, an occupational medical dose document, an occupational environmental dose document, an occupational internal dose document, and an occupational external dose-monitoring document. NIOSH officials are also compiling worker profiles, which provide information on the worker’ job, work location within the facility, and time periods worked. NIOSH sometimes uses the worker profiles to obtain proxy information when some information is not available for a particular claimant. NIOSH initially expected to conduct dose reconstructions while developing site profiles for the facilities involved but encountered difficulties in doing so. By pursuing both efforts at the same time, NIOSH officials had hoped to avoid facing a backlog of claims by completing a substantial number of dose reconstructions. However, NIOSH determined that it was necessary to first complete the site profiles to complete a high volume of dose reconstructions because it was too inefficient to collect general site-related information on a case-by-case basis. In addition, while Energy has supported NIOSH’s efforts in locating site-specific information, there have been some delays in providing this information, particularly when the information requested is from classified documents. When requests for classified documents are made, delays have occurred because of the time needed for Energy to comply with procedures for ensuring national security. 
NIOSH currently has an extensive effort under way to develop site profiles, and this effort has helped expedite the processing of claims. NIOSH has established over a dozen teams, each composed of three to six experts, and made each team responsible for developing a different site profile. NIOSH prioritized its efforts by targeting those facilities that have the largest number of claims needing dose reconstruction; 15 of the 30 sites NIOSH anticipates completing a site profile for represent about 80 percent of the claims submitted for dose reconstruction. As of June 2004, 11 site profiles were fully completed, while 9 other site profiles were partially completed (see table 2). In cases where a site profile has been completed, NIOSH has been able to better process the claims needing dose reconstruction associated with those facilities. For instance, since the first of six Technical Basis Documents for the Savannah River site profile was approved, in July 2003, NIOSH had completed about 500 dose reconstructions for that site by January 31, 2004, whereas NIOSH had completed fewer than 10 dose reconstructions for that site prior to July 2003. While some site profiles are only partially completed, NIOSH is still able to use the completed Technical Basis documents, as applicable, to develop dose reconstructions. For example, at the Idaho National Engineering and Environmental Laboratory (INEEL), the occupational internal dose document is still being finalized. However, if NIOSH has a claim that only needs the use of other INEEL site profile documents that are finalized, such as the occupational medical dose document or the occupational external dosimetry documents, a dose reconstruction can be developed for this claim. In addition, completed site profiles may be modified as additional relevant information is identified and incorporated. Claims originally denied based upon a prior version of a site profile are re- examined to determine the effect that the new information may have on the compensability of the claim. In turn, Labor can make any appropriate modifications to its earlier claim decisions. Despite efforts to complete the remaining site profiles, NIOSH officials said that their time frame for completing these profiles is uncertain. The site profiles that have been completed have taken on average 4 to 6 months to complete. NIOSH reported that the pace at which it can complete additional site profiles is constrained by the limited expert resources available to conduct this specialized work and by the complexity of the history and variety of operations at particular sites. In addition, NIOSH officials said that it generally takes longer to complete site profiles for atomic weapons employer sites because many of these sites are no longer operating or are privately owned, making it difficult to locate records. Because the number of available staff needed to complete site profiles is very limited, NIOSH officials stated that they have had to balance their use of these resources. As site profiles are completed, resources are reallocated to assist with the completion of additional site profiles. HHS’s Advisory Board has a major effort under way to ensure claims decisions are being made consistently. 
Specifically, HHS’s Advisory Board is responsible under the statute for (1) reviewing a reasonable sample of individual dose reconstructions for scientific validity and quality, (2) advising on the development of guidelines to determine probability of causation and methods for dose reconstruction, and (3) reviewing special exposure cohort petitions. To assist the Advisory Board, HHS entered into a contract with an organization in October 2003 to carry out some of these tasks. The contractor is currently developing its plans for completing these tasks and expects to conduct the evaluation over the next 5 years and to provide interim status reports each year. Performing an independent review to examine the consistency of individual dose reconstruction decisions is an important aspect of effective program management for the Subtitle B program. In the past, GAO reported concerns that a similar program that compensates veterans with diseases caused by radiation exposure did not have an independent review of its dose reconstructions. Such a review could result in greater public confidence and mitigate concerns about dose reconstructions. NIOSH officials have also stated that the evaluation of the individual dose reconstructions and site profiles is an important exercise to complete. The Chair of the Advisory Board said that while the board is confident in what NIOSH’s findings have been to date, it is important to have an independent review completed in order to validate these findings. After HHS had twice received public comments on proposed regulations concerning how individuals or groups could apply for special exposure cohort status, the agency issued final regulations on May 28, 2004. The Secretary of HHS is responsible for developing procedures for considering petitions to be added to the special exposure cohort. HHS originally published a proposal for these procedures on June 25, 2002, and subsequently received a number of public comments. Many of these comments pertained to the feasibility of completing dose reconstructions and establishing time limits for completing dose reconstructions. Because HHS needed to make substantial changes to the procedures to address public comments, the agency issued a second notice of proposed rulemaking on March 7, 2003, and solicited public comments through May 6, 2003. Again, many of the comments related to completing dose reconstructions in a feasible and timely manner. HHS’s regulations establish procedures that describe how petitions can be submitted and reviewed for special exposure cohort consideration. These requirements are intended to ensure that petitions are submitted by authorized parties, are justified, and receive uniform, fair, and scientific consideration. The procedures are also designed to give petitioners and interested parties opportunities for appropriate involvement in the process. The procedures are not intended to provide a second opportunity to qualify a claim for compensation, once NIOSH has completed a dose reconstruction and Labor has determined that the claimed cancer was not “at least as likely as not” to have been caused by the estimated radiation doses. With the implementation of the regulations, some of the claims in NIOSH’s backlog could be eligible for special exposure cohort status and consequently reduce the backlog of claims requiring dose reconstruction.
If a petition to add a particular group to the special exposure cohort is submitted and approved, NIOSH would not need to develop an individual dose reconstruction for such a claim. Rather, Labor would verify the claimant’s employment and illness and follow the review process currently used for existing special exposure cohort groups. As of July 30, 2004, NIOSH had received eight special exposure cohort petitions and was determining whether the petitions were eligible for consideration. Labor’s procedures and practices have helped the agency to fully process most of the claims that had not been referred to NIOSH for dose reconstruction. Because this program is relatively new, Labor has issued many different policies to define how staff should handle different situations and is working to develop a comprehensive procedures manual that would contain these policies. In addition, the accountability reviews performed each year have allowed Labor to identify and correct problems as they occur and provide additional training to staff as needed. To ensure consistency in the processing of claims during this period of change, it will continue to be important for Labor to maintain these ongoing efforts. In contrast, relatively few claims requiring dose reconstructions have been fully processed. NIOSH faces the challenge of balancing multiple objectives—scientific soundness and timeliness—in completing dose reconstructions. However, while NIOSH has placed considerable focus on ensuring scientific soundness, it has not established a clear vision for claimants or the Congress with regard to the time frames within which they can expect dose reconstructions to be completed. NIOSH established a GPRA goal for fiscal year 2004 that specifies a time frame for completing draft dose reconstructions once a claim is assigned to staff to perform a dose reconstruction. However, claims associated with facilities that do not have site profiles are typically not assigned to staff for dose reconstruction, and this waiting period is not reflected in the GPRA goal. NIOSH learned from its initial implementation experience that completing site profiles is a critical element for efficiently processing claims requiring dose reconstruction. While NIOSH had completed 11 site profiles and partially completed 9 profiles as of June 2004, it had not established any time frames for completing these 9 site profiles or the remaining 10 site profiles that it expects to develop. Without such time frames, claimants do not have a good understanding of when their dose reconstruction might be completed. While it is important to avoid the extreme of establishing time frames that are unreasonable and would set up NIOSH for failure, it is equally important to avoid the other extreme of not setting any expectations for the timely completion of dose reconstructions for which site profiles have not been completed. Moreover, now that NIOSH has more experience in developing site profiles, it is in a better position to identify and take account of factors that can lead to differences in the amount of time required to complete site profiles for different facilities. To enhance program management and promote greater transparency with regard to the timeliness of completing dose reconstructions, we recommend that the Secretary of HHS direct CDC officials to establish time frames for completing the remaining site profiles. We provided a draft of this report to both Labor and HHS for comment. Labor did not have any comments on the report. 
HHS said that the report was balanced, thorough, and constructive, and that it agreed with GAO’s recommendation to establish time frames for completing the remaining site profiles. HHS also provided updated information on the number of site profiles already completed and the total number of site profiles that it anticipates compiling, and we revised the report to incorporate this information. HHS added that it has used innovative solutions to complete dose reconstructions in some instances in which site profiles do not exist, and we modified the report to incorporate this information. Moreover, HHS provided additional information to explain how completed site profiles function as “living documents” and are modified as additional relevant information is identified. Finally, HHS raised questions about the accuracy of certain statistics we cited about cases that had been fully processed by Labor, while acknowledging that Labor is a more authoritative source on this topic. We believe that these statistics accurately describe what they were intended to measure, and Labor did not raise any issue about their accuracy; hence, we did not revise the figures. HHS’s comments are provided in appendix IV. HHS also provided technical comments, which we have incorporated as appropriate. Copies of this report are being sent to the Secretary of Labor and the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. The report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix VI. To determine how well the Department of Labor’s (Labor) procedures and practices ensure the timely and consistent processing of claims that are not referred to the National Institute for Occupational Safety and Health (NIOSH) for dose reconstruction but are being processed by Labor, we reviewed Labor’s regulations, procedures, and practices related to processing claims. In addition, we interviewed officials from Labor’s Office of Workers’ Compensation Programs and its four district offices in Cleveland, Ohio; Denver, Colorado; Jacksonville, Florida; and Seattle, Washington, to discuss their procedures and practices. We also obtained and analyzed information on Labor’s Government Performance and Results Act (GPRA) goals and interim processing goals for fiscal year 2002 through the second quarter of fiscal year 2004. In addition, we obtained and analyzed accountability review documents for fiscal years 2002 and 2003. We interviewed Labor officials to obtain information on the department’s efforts to revise its procedures manual used by staff in processing claims. Last, for background purposes, we interviewed several claimants and Energy Employees Occupational Illness Compensation Program Act (EEOICPA) experts regarding their knowledge of or experiences with Subtitle B claims processing. To determine how well Labor’s and NIOSH’s procedures and practices ensure the timely and consistent processing of claims that are referred to NIOSH for dose reconstruction, we reviewed Labor’s and NIOSH’s regulations, procedures, and practices related to processing claims. Along with the work we performed at Labor as described earlier, we interviewed officials from NIOSH’s Office of Compensation Analysis and Support (OCAS) to discuss their procedures and practices.
In addition, we obtained and analyzed information on NIOSH’s GPRA goals and interim processing goals for fiscal year 2002 through the second quarter of fiscal year 2004. We also reviewed several of the completed site profiles and obtained information on NIOSH’s time frames for completing additional site profiles needed to assist with the dose reconstruction process. We reviewed recently introduced regulations for considering petitions to be added to the special exposure cohort as well as different pieces of legislation introduced that would establish additional sites as special exposure cohort sites. We also interviewed the Advisory Board chair and reviewed key documents pertaining to the evaluation of dose reconstructions that the Advisory Board is overseeing. Last, for background purposes, we interviewed several NIOSH contract staff, claimants, and EEOICPA experts regarding their knowledge of or experiences with Subtitle B claims processing. To determine the number, status, and other characteristics of Subtitle B claims filed through January 31, 2004, we analyzed administrative data extracted from Labor’s and NIOSH’s case management systems for applications filed from the beginning of the program—July 31, 2001— through January 31, 2004. Neither agency publishes standardized data extracts from their systems, so we requested that they provide customized extracts for our analysis. Specifically, we received an extract from the NIOSH Office of Compensation Analysis and Support Claims Tracking System (NOCTS) and several files extracted from Labor’s Energy Cases Management System (ECMS) and Energy Medical Bill Processing Subsystem (EMBPS). Because multiple claims can be associated with a single worker, the systems and the extracts received from both agencies contain data collected at two levels—the case level and the claim level. For example, if multiple children of a deceased worker file claims, all claims will be associated with a single case, which is linked to the worker. At the case level, the extracts contained information about the worker, such as date of birth and date of death (if applicable), the facilities at which the employee worked, the employee’s dates of employment, and the status of the case as it moves through the development process. At the claim level, the extracts contained information related to the individual claimants, such as the date the claim was signed, the claimant’s relationship to the worker, and the status of the claim as it progressed through processing. The Labor files were merged to produce claim- and case-level data files and were subsequently merged with the NIOSH extract. Throughout this report, we have reported our statistics at the claim level. Where case-level statistics have been reported, they have been merged with the claim-level data so that they could be reported at the claim level. We interviewed key Labor and NIOSH officials and contractors and reviewed available system documentation, such as design specifications and system update documents. We tested the data sets to determine that they were sufficiently reliable for our purposes. Specifically, we performed electronic testing to identify missing data or logical inconsistencies. We did not assess the quality of Labor’s claims decisions. We then computed descriptive statistics, including frequencies and cross-tabulations, to determine the number and status of claims received as of January 31, 2004. 
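For readers unfamiliar with this kind of merge, the sketch below illustrates, in simplified form, joining a claim-level extract to a case-level extract and tabulating claim status by referral category. The column names (claim_id, case_id, claim_status, referred_to_niosh) are assumptions for illustration; they are not the actual ECMS or NOCTS schemas, and this is not the analysis code used for the report.

```python
# Minimal sketch of the kind of merge and descriptive analysis described
# above. Column names are hypothetical and do not reflect the agencies'
# actual data layouts.
import pandas as pd

# Claim-level extract: one row per claimant.
claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4],
    "case_id": [100, 100, 101, 102],
    "claim_status": ["approved", "approved", "denied", "in processing"],
})

# Case-level extract: one row per worker. Merging it onto the claims carries
# case attributes down to the claim level so statistics can be reported there.
cases = pd.DataFrame({
    "case_id": [100, 101, 102],
    "referred_to_niosh": [True, False, True],
})

merged = claims.merge(cases, on="case_id", how="left")

# Frequencies and a cross-tabulation of claim status by referral category.
print(merged["claim_status"].value_counts())
print(pd.crosstab(merged["referred_to_niosh"], merged["claim_status"]))
```

Carrying case-level attributes onto each associated claim in this way is what allows the results to be reported at the claim level, as described above.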
In order to provide more current information, we interviewed Labor and NIOSH officials to obtain updated information on the approval rates for cases that required dose reconstructions as of July 2004. We also interviewed NIOSH officials to obtain information on the average time taken to draft dose reconstructions as of July 2004 and the number of dose reconstructions completed in the first 9 months of fiscal year 2004. [Figure: overview of the claims process. A claim is referred to NIOSH for dose reconstruction; NIOSH obtains the worker's and workplace monitoring information from Energy and other sources as appropriate; and Labor then makes the final adjudication decision.] While Labor makes and reports its decisions on the claim level, NIOSH reports its dose reconstruction results on the case level because only one dose reconstruction is completed for each worker regardless of the number of claims filed by survivors. Table 3 presents information on the status of claims referred to NIOSH for dose reconstructions at both the claim and case levels as of January 31, 2004. In addition to the above contacts, Melinda L. Cordero and Rosemary Torres Lerma made significant contributions to this report. Luann Moy and William Bates assisted with methodology and data analysis, Margaret Armen provided legal support, and Amy E. Buck assisted with the message and report development.
Subtitle B of the Energy Employees Occupational Illness Compensation Program Act, administered by the Department of Labor (Labor), provides eligible workers who developed illnesses from their work, or their survivors, with a onetime total payment of $150,000, and coverage for medical expenses related to the illnesses. For some claims, Labor uses radiation exposure estimates (dose reconstructions) performed by the National Institute for Occupational Safety and Health (NIOSH), part of the Department of Health and Human Services' (HHS) Centers for Disease Control and Prevention (CDC), to determine if the illness claimed was "at least as likely as not" related to employment at a covered facility. GAO was asked to determine (1) how well Labor's procedures and practices ensure the timely and consistent processing of claims that are not referred to NIOSH for dose reconstruction but are being processed by Labor and (2) how well Labor's and NIOSH's procedures and practices ensure the timely and consistent processing of claims that are referred for dose reconstruction. GAO did not assess the quality of Labor's claims decisions. In the first 2 and 1/2 years of the program--July 31, 2001, through January 31, 2004--Labor had fully processed 83 percent of the nearly 30,000 claims that had not been referred to NIOSH for dose reconstruction; these claims correspond to nearly 23,000 cases for individual workers. (Multiple claims can be associated with a case as eligible survivors may each file claims.) Labor took an average of 7 months to fully process these claims. About 42 percent of claims with final decisions were approved, resulting in $625 million in lump-sum compensation payments. The remaining 58 percent of claims with final decisions were denied--the majority because they did not meet medical or employment eligibility criteria. Labor generally met its timeliness goals for processing claims and is working to ensure that claims are processed consistently by conducting accountability reviews and creating a task force to update its procedure manual. In the first 2 and 1/2 years of the program, Labor and NIOSH had fully processed about 9 percent of the more than 21,000 claims (which correspond to about 15,000 cases) that were referred to NIOSH for dose reconstructions, taking an average of 17 months to fully process claims. Fifty-one percent of the processed claims were approved, and Labor has paid out about $65 million in lump-sum compensation. Forty-nine percent were denied because it was determined that the claimed illness was not at least as likely as not related to employment at a covered facility. A backlog of claims needing dose reconstruction developed because NIOSH needed time to get the necessary staff and procedures in place to complete the dose reconstructions and develop site profiles. Efforts are under way to develop site profiles that contain facility-specific information that is useful in completing dose reconstructions. However, processing claims associated with facilities that do not have site profiles, in some instances, has essentially stopped, and NIOSH has not established a time frame for completing these remaining site profiles because of limited expert resources and site complexities. As a result, some claimants could wait a considerable period of time to have their claims fully processed. To help ensure the consistency of claim decisions, HHS's Advisory Board is conducting an independent external evaluation of dose reconstruction decisions and site profiles.
Dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic imaginable; financial and other business transactions can be executed almost instantaneously, often 24 hours a day; and electronic mail, Internet Web sites, and computer bulletin boards allow us to communicate quickly and easily with a virtually unlimited number of individuals and groups. However, this widespread interconnectivity poses significant risks to the government’s and our nation’s computer systems and, more important, to the critical operations and infrastructures they support. For example, telecommunications, power distribution, water supply, public health services, national defense (including the military’s warfighting capability), law enforcement, government services, and emergency services all depend on the security of their computer operations. The speed and accessibility that create the enormous benefits of the computer age, if not properly controlled, may allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Table 1 summarizes the key threats to our nation’s infrastructures, as observed by the Federal Bureau of Investigation (FBI). Government officials remain concerned about attacks from individuals and groups with malicious intent, such as crime, terrorism, foreign intelligence gathering, and acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, since these individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of knowledge about computer intrusions. As greater amounts of money and more sensitive economic and commercial information are exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on standardized information technology (IT), the likelihood increases that information attacks will threaten vital national interests. As the number of individuals with computer skills has increased, more intrusion or “hacking” tools have become readily available and relatively easy to use. A hacker can literally download tools from the Internet and “point and click” to start an attack. Experts agree that there has been a steady advance in the sophistication and effectiveness of attack technology. Intruders quickly develop attacks to exploit vulnerabilities discovered in products, use these attacks to compromise computers, and share them with other attackers. In addition, they can combine these attacks with other forms of technology to develop programs that automatically scan the network for vulnerable systems, attack them, compromise them, and use them to spread the attack even further. 
Between 1995 and the first half of 2003, the CERT Coordination Center (CERT/CC) reported 11,155 security vulnerabilities that resulted from software flaws. Figure 1 illustrates the dramatic growth in security vulnerabilities over these years. The growing number of known vulnerabilities increases the number of potential attacks created by the hacker community. Attacks can be launched against specific targets or widely distributed through viruses and worms. Along with these increasing threats, the number of computer security incidents reported to the CERT/CC has also risen dramatically—from 9,859 in 1999 to 82,094 in 2002 and 76,404 for just the first half of 2003. And these are only the reported attacks. The director of the CERT Centers stated that he estimates that as much as 80 percent of actual security incidents go unreported, in most cases because (1) the organization was unable to recognize that its systems had been penetrated or there were no indications of penetration or attack or (2) the organization was reluctant to report. Figure 2 shows the number of incidents that were reported to the CERT/CC from 1995 through the first half of 2003. According to the National Security Agency, foreign governments already have or are developing computer attack capabilities, and potential adversaries are developing a body of knowledge about U.S. systems and about methods to attack these systems. The National Infrastructure Protection Center (NIPC) reported in January 2002 that a computer belonging to an individual with indirect links to Osama bin Laden contained computer programs that suggested that the individual was interested in structural engineering as it related to dams and other water-retaining structures. The NIPC report also stated that U.S. law enforcement and intelligence agencies had received indications that Al Qaeda members had sought information about control systems from multiple Web sites, specifically on water supply and wastewater management practices in the United States and abroad. Since the terrorist attacks of September 11, 2001, warnings of the potential for terrorist cyber attacks against our critical infrastructures have also increased. For example, in his February 2002 statement for the Senate Select Committee on Intelligence, the director of central intelligence discussed the possibility of cyber warfare attack by terrorists. He stated that the September 11 attacks demonstrated the nation's dependence on critical infrastructure systems that rely on electronic and computer networks. Further, he noted that attacks of this nature would become an increasingly viable option for terrorists as they and other foreign adversaries become more familiar with these targets and the technologies required to attack them. What are control systems? Control systems are computer-based systems that are used by many infrastructures and industries to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. In the electric power industry, they can manage and control the transmission and delivery of electric power, for example, by opening and closing circuit breakers and setting thresholds for preventive shutdowns.
Employing integrated control systems, the oil and gas industry can control the refining operations on a plant site as well as remotely monitor the pressure and flow of gas pipelines and control the flow and pathways of gas transmission. In water utilities, they can remotely monitor well levels and control the wells' pumps; monitor flows, tank levels, or pressure in storage tanks; monitor water quality characteristics, such as pH, turbidity, and chlorine residual; and control the addition of chemicals. Control system functions vary from simple to complex; they can be used to simply monitor processes—for example, the environmental conditions in a small office building—or manage most activities in a municipal water system or even a nuclear power plant. In certain industries such as chemical and power generation, safety systems are typically implemented to mitigate a disastrous event if control and other systems fail. In addition, to guard against both physical attack and system failure, organizations may establish back-up control centers that include uninterruptible power supplies and backup generators. There are two primary types of control systems. Distributed Control Systems (DCS) typically are used within a single processing or generating plant or over a small geographic area. Supervisory Control and Data Acquisition (SCADA) systems typically are used for large, geographically dispersed distribution operations. A utility company may use a DCS to generate power and a SCADA system to distribute it. Figure 3 illustrates the typical components of a control system. A control system typically consists of a "master" or central supervisory control and monitoring station consisting of one or more human-machine interfaces where an operator can view status information about the remote sites and issue commands directly to the system. Typically, this station is located at a main site along with application servers and an engineering workstation that is used to configure and troubleshoot the other control system components. The supervisory control and monitoring station is typically connected to local controller stations through a hard-wired network or to remote controller stations through a communications network—which could be the Internet, a public switched telephone network, or a cable or wireless (e.g., radio, microwave, or Wi-Fi) network. Each controller station has a Remote Terminal Unit (RTU), a Programmable Logic Controller (PLC), DCS controller, or other controller that communicates with the supervisory control and monitoring station. The controller stations also include sensors and control equipment that connect directly with the working components of the infrastructure—for example, pipelines, water towers, and power lines. The sensor takes readings from the infrastructure equipment—such as water or pressure levels, electrical voltage or current—and sends a message to the controller. The controller may be programmed to determine a course of action and send a message to the control equipment instructing it what to do—for example, to turn off a valve or dispense a chemical. If the controller is not programmed to determine a course of action, the controller communicates with the supervisory control and monitoring station before sending a command back to the control equipment. The control system also can be programmed to issue alarms back to the operator when certain conditions are detected.
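To make the sequence just described concrete, the following sketch walks one reading through a highly simplified loop: a sensor value reaches a controller, the controller applies its programmed logic or defers to the supervisory station, and a command or alarm results. This is a conceptual illustration only, not any vendor's RTU or PLC logic; the point names, thresholds, and command strings are invented for the example.

```python
# Conceptual illustration of the control loop described above: a controller reads a
# sensor, acts locally when it has programmed logic, and otherwise defers to the
# supervisory station. All names, thresholds, and commands here are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    point: str        # e.g., "tank_level"
    value: float

def local_controller(reading: Reading, high_limit: float, low_limit: float) -> str:
    """Programmed logic in an RTU/PLC-style controller: decide a command locally."""
    if reading.value > high_limit:
        return "CLOSE_VALVE"          # stop inflow when the level is too high
    if reading.value < low_limit:
        return "OPEN_VALVE"           # start inflow when the level is too low
    return "NO_ACTION"

def supervisory_station(reading: Reading) -> str:
    """Fallback path: the master station alerts the operator and awaits a command."""
    print(f"ALARM to operator: {reading.point} = {reading.value}")
    return "AWAIT_OPERATOR_COMMAND"

# One pass through the loop: sensor -> controller -> control equipment.
reading = Reading(point="tank_level", value=9.2)
command = local_controller(reading, high_limit=8.0, low_limit=2.0)
if command == "NO_ACTION":
    command = supervisory_station(reading)
print("command sent to control equipment:", command)
```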
Handheld devices, such as personal digital assistants, can be used to locally monitor controller stations. Experts report that technologies in controller stations are becoming more intelligent and automated and communicate with the supervisory central monitoring and control station less frequently, requiring less human intervention. Historically, security concerns about control systems were related primarily to protecting against physical attack and misuse of refining and processing sites or distribution and holding facilities. However, more recently, there has been a growing recognition that control systems are now vulnerable to cyber attacks from numerous sources, including hostile governments, terrorist groups, disgruntled employees, and other malicious intruders. In October 1997, the President’s Commission on Critical Infrastructure Protection specifically discussed the potential damaging effects on the electric power and oil and gas industries of successful attacks on control systems. Moreover, in 2002, the National Research Council identified “the potential for attack on control systems” as requiring “urgent attention.” In February 2003, the President clearly demonstrated concern about “the threat of organized cyber attacks capable of causing debilitating disruption to our Nation’s critical infrastructures, economy, or national security,” noting that “disruption of these systems can have significant consequences for public health and safety” and emphasizing that the protection of control systems has become “a national priority.” Several factors have contributed to the escalation of risk to control systems, including (1) the adoption of standardized technologies with known vulnerabilities, (2) the connectivity of control systems to other networks, (3) constraints on the implementation of existing security technologies and practices, (4) insecure remote connections, and (5) the widespread availability of technical information about control systems. Historically, proprietary hardware, software, and network protocols made it difficult to understand how control systems operated—and therefore how to hack into them. Today, however, to reduce costs and improve performance, organizations have been transitioning from proprietary systems to less expensive, standardized technologies such as Microsoft’s Windows and Unix-like operating systems and the common networking protocols used by the Internet. These widely used standardized technologies have commonly known vulnerabilities, and sophisticated and effective exploitation tools are widely available and relatively easy to use. As a consequence, both the number of people with the knowledge to wage attacks and the number of systems subject to attack have increased. Also, common communication protocols and the emerging use of Extensible Markup Language (commonly referred to as XML) can make it easier for a hacker to interpret the content of communications among the components of a control system. Enterprises often integrate their control systems with their enterprise networks. This increased connectivity has significant advantages, including providing decision makers with access to real-time information and allowing engineers to monitor and control the process control system from different points on the enterprise network. In addition, the enterprise networks are often connected to the networks of strategic partners and to the Internet. 
Furthermore, control systems are increasingly using wide area networks and the Internet to transmit data to their remote or local stations and individual devices. This convergence of control networks with public and enterprise networks potentially exposes the control systems to additional security vulnerabilities. Unless appropriate security controls are deployed in the enterprise network and the control system network, breaches in enterprise security can affect the operation of control systems. According to industry experts, existing security technologies, as well as strong user authentication and patch management practices, are generally not implemented in control systems because control systems operate in real time, typically are not designed with cybersecurity in mind, and usually have limited processing capabilities. Existing security technologies such as authorization, authentication, encryption, intrusion detection, and filtering of network traffic and communications require more bandwidth, processing power, and memory than control system components typically have. Because controller stations are generally designed to do specific tasks, they use low-cost, resource-constrained microprocessors. In fact, some devices in the electrical industry still use the Intel 8088 processor, introduced in 1978. Consequently, it is difficult to install existing security technologies without seriously degrading the performance of the control system. Further, complex passwords and other strong password practices are not always used to prevent unauthorized access to control systems, in part because this could hinder a rapid response to safety procedures during an emergency. As a result, according to experts, weak passwords that are easy to guess, shared, and infrequently changed are common in control systems, including the use of default passwords or even no password at all. In addition, although modern control systems are based on standard operating systems, they are typically customized to support control system applications. Consequently, vendor-provided software patches are generally either incompatible or cannot be implemented without compromising service by shutting down "always-on" systems or affecting interdependent operations. Potential vulnerabilities in control systems are exacerbated by insecure connections. Organizations often leave access links—such as dial-up modems to equipment and control information—open for remote diagnostics, maintenance, and examination of system status. Such links may not be protected with authentication or encryption, which increases the risk that hackers could use these insecure connections to break into remotely controlled systems. Also, control systems often use wireless communications systems, which are especially vulnerable to attack, or leased lines that pass through commercial telecommunications facilities. Without encryption to protect data as it flows through these insecure connections or authentication mechanisms to limit access, there is limited protection for the integrity of the information being transmitted. Public information about infrastructures and control systems is available to potential hackers and intruders.
The availability of this infrastructure and vulnerability data was demonstrated earlier this year by a George Mason University graduate student, whose dissertation reportedly mapped every business and industrial sector in the American economy to the fiber-optic network that connects them—using material that was available publicly on the Internet, none of which was classified. Many of the electric utility officials who were interviewed for the National Security Telecommunications Advisory Committee's Information Assurance Task Force's Electric Power Risk Assessment expressed concern over the amount of information about their infrastructure that is readily available to the public. In the electric power industry, open sources of information—such as product data and educational videotapes from engineering associations—can be used to understand the basics of the electrical grid. Other publicly available information—including filings of the Federal Energy Regulatory Commission (FERC), industry publications, maps, and material available on the Internet—is sufficient to allow someone to identify the most heavily loaded transmission lines and the most critical substations in the power grid. In addition, significant information on control systems is publicly available—including design and maintenance documents, technical standards for the interconnection of control systems and RTUs, and standards for communication among control devices—all of which could assist hackers in understanding the systems and how to attack them. Moreover, there are numerous former employees, vendors, support contractors, and other end users of the same equipment worldwide with inside knowledge of the operation of control systems. There is a general consensus—and increasing concern—among government officials and experts on control systems about potential cyber threats to the control systems that govern our critical infrastructures. As components of control systems increasingly make critical decisions that were once made by humans, the potential effect of a cyber threat becomes more devastating. Such cyber threats could come from numerous sources, ranging from hostile governments and terrorist groups to disgruntled employees and other malicious intruders. Based on interviews and discussions with representatives throughout the electric power industry, the Information Assurance Task Force of the National Security Telecommunications Advisory Committee concluded that an organization with sufficient resources, such as a foreign intelligence service or a well-supported terrorist group, could conduct a structured attack on the electric power grid electronically, with a high degree of anonymity and without having to set foot in the target nation. In July 2002, NIPC reported that the potential for compound cyber and physical attacks, referred to as "swarming attacks," is an emerging threat to the U.S. critical infrastructure. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack. For instance, a cyber attack that disabled the water supply or the electrical system in conjunction with a physical attack could deny emergency services the necessary resources to manage the consequences—such as controlling fires, coordinating actions, and generating light.
According to the National Institute of Standards and Technology, cyber attacks on energy production and distribution systems—including electric, oil, gas, and water treatment, as well as on chemical plants containing potentially hazardous substances—could endanger public health and safety, damage the environment, and have serious financial implications, such as loss of production, generation, or distribution of public utilities; compromise of proprietary information; or liability issues. When backups for damaged components are not readily available (e.g., extra-high-voltage transformers for the electric power grid), such damage could have a long-lasting effect. Although experts in control systems report that they have substantiated reports of numerous incidents affecting control systems, there is no formalized process to collect and analyze information about control systems incidents. CERT/CC and KEMA, Inc. have proposed establishing a center that will proactively interact with industry to collect information about potential cyber incidents, analyze them, assess their potential impact, and make the results available to industry. I will now discuss potential and reported cyber attacks on control systems. Entities or individuals with malicious intent might take one or more of the following actions to successfully attack control systems: disrupt the operation of control systems by delaying or blocking the flow of information through control networks, thereby denying availability of the networks to control system operators; make unauthorized changes to programmed instructions in PLCs, RTUs, or DCS controllers, change alarm thresholds, or issue unauthorized commands to control equipment, which could potentially result in damage to equipment (if tolerances are exceeded), premature shutdown of processes (such as prematurely shutting down transmission lines), or even disabling of control equipment; send false information to control system operators either to disguise unauthorized changes or to initiate inappropriate actions by system operators; modify the control system software, producing unpredictable results; and interfere with the operation of safety systems. In addition, in control systems that cover a wide geographic area, the remote sites are often unstaffed and may not be physically monitored. If such remote systems are physically breached, the attackers could establish a cyber connection to the control network. Department of Energy and industry researchers have speculated on how the following potential attack scenario could affect control systems in the electricity sector. Using war dialers to find modem phone lines that connect to the programmable circuit breakers of the electric power control system, hackers could crack passwords that control access to the circuit breakers and could change the control settings to cause local power outages and even damage equipment. A hacker could lower settings from, for example, 500 amperes to 200 on some circuit breakers; normal power usage would activate, or "trip," the circuit breakers, taking those lines out of service and diverting power to neighboring lines. If, at the same time, the hacker raised the settings on these neighboring lines to 900 amperes, circuit breakers would fail to trip at these high settings and the diverted power would overload the lines and cause significant damage to transformers and other critical equipment. The damaged equipment would require major repairs that could result in lengthy outages.
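The arithmetic behind this speculative scenario can be shown with a toy model. The breaker settings of 200 and 900 amperes come from the scenario above; the normal line load and thermal rating used here are assumptions added purely for illustration, and nothing in the sketch models a real protection scheme.

```python
# Toy illustration of the speculative scenario described above: lowering one breaker's
# setting causes a nuisance trip, and raising the neighboring breaker's setting lets
# the diverted load exceed equipment ratings without tripping. The 200 A and 900 A
# settings come from the scenario text; the load and rating values are assumptions.

def breaker_trips(load_amps: float, setting_amps: float) -> bool:
    """A breaker trips when the load on its line exceeds its configured setting."""
    return load_amps > setting_amps

normal_load = 400.0     # assumed normal load on each line, in amperes
line_rating = 600.0     # assumed safe thermal rating of a line, in amperes

# Tampered settings from the scenario: 500 A lowered to 200 A, neighbor raised to 900 A.
attacked_setting, neighbor_setting = 200.0, 900.0

# Normal usage now exceeds the lowered setting, so the attacked line trips out of service.
print("attacked line trips:", breaker_trips(normal_load, attacked_setting))    # True

# Its load diverts to the neighboring line.
neighbor_load = normal_load + normal_load                                       # 800 A
print("neighbor trips:", breaker_trips(neighbor_load, neighbor_setting))        # False at 900 A
print("neighbor loaded past its rating:", neighbor_load > line_rating)          # equipment damage risk
```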
Additionally, control system researchers at the Department of Energy’s national laboratories have developed systems that demonstrate the feasibility of a cyber attack on a control system at an electric power substation, where high-voltage electricity is transformed for local use. Using tools that are readily available on the Internet, they are able to modify output data from field sensors and take control of the PLC directly in order to change settings and create new output. These techniques could enable a hacker to cause an outage, thus incapacitating the substation. The consequences of these threats could be lessened by the successful operation of any safety systems, which I discussed earlier in my testimony. There have been a number of reported exploits of control systems, including the following: In 1998, during the two-week military exercise known as Eligible Receiver, staff from the National Security Agency used widely available tools to simulate how sections of the U.S. electric power grid’s control network could be disabled through cyber attack. In the spring of 2000, a former employee of an Australian company that develops manufacturing software applied for a job with the local government, but was rejected. The disgruntled former employee reportedly used a radio transmitter on numerous occasions to remotely hack into the controls of a sewage treatment system and ultimately release about 264,000 gallons of raw sewage into nearby rivers and parks. In August 2003, the Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm—otherwise known as Slammer—infected a private computer network at the Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant’s process computer failed, and it took about 6 hours for it to become available again. Slammer reportedly also affected communications on the control networks of other electricity sector organizations by propagating so quickly that control system traffic was blocked. Media reports have also indicated that the Blaster worm, which broke out three days before the August blackout, might have exacerbated the problems that contributed to the cascading effect of the blackout by blocking communications on computers that are used to monitor the power grid. FirstEnergy Corp., the Ohio utility that is the chief focus of the blackout investigation, is reportedly exploring whether Blaster might have caused the computer trouble that was described on telephone transcripts as hampering its response to multiple line failures. Several challenges must be addressed to effectively secure control systems against cyber threats. These challenges include: (1) the limitations of current security technologies in securing control systems; (2) the perception that securing control systems may not be economically justifiable; and (3) the conflicting priorities within organizations regarding the security of control systems. A significant challenge in effectively securing control systems is the lack of specialized security technologies for these systems. As I previously mentioned, the computing resources in control systems that are needed to perform security functions tend to be quite limited, making it very difficult to use security technologies within control system networks without severely hindering performance. 
Although technologies such as robust firewalls and strong authentication can be employed to better segment control systems from enterprise networks, research and development could help address the application of security technologies to the control systems themselves. Information security organizations have noted that a gap exists between current security technologies and the need for additional research and development to secure control systems. Research and development in a wide range of areas could lead to more effective technologies to secure control systems. Areas that have been noted for possible research and development include identifying the types of security technologies needed for different control system applications, determining acceptable performance trade-offs, and recognizing attack patterns for intrusion-detection systems. Experts and industry representatives have indicated that organizations may be reluctant to spend more money to secure control systems. Hardening the security of control systems would require industries to expend more resources, including acquiring more personnel, providing training for personnel, and potentially prematurely replacing current systems that typically have a lifespan of about 20 years. Several vendors suggested that since there has been no confirmed serious cyber attack on U.S. control systems, industry representatives believe the threat of such an attack is low. Until industry users of control systems have a business case to justify why additional security is needed, there may be little market incentive for vendors to fund research to develop more secure control systems. Finally, several experts and industry representatives indicated that the responsibility for securing control systems typically involves two separate groups: IT security personnel and control system engineers and operators. IT security personnel tend to focus on securing enterprise systems, while control system engineers and operators tend to be more concerned with the reliable performance of their control systems. Further, they indicate that, as a result, those two groups do not always fully understand each other's requirements or collaborate to implement secure control systems. These conflicting priorities may perpetuate a lack of awareness of IT security strategies that could be deployed to mitigate the vulnerabilities of control systems without affecting their performance. Although research and development will be necessary to develop technologies to secure individual control system devices, IT security technologies are currently available that could be implemented as part of a secure enterprise architecture to protect the perimeter of, and access to, control system networks. These technologies include firewalls, intrusion-detection systems, encryption, authentication, and authorization. Officials from one company indicated that, to reduce its control system vulnerabilities, it formed a team composed of IT staff, process control engineers, and manufacturing employees. This team worked collaboratively to research vulnerabilities and test fixes and workarounds. Several steps can be considered when addressing potential threats to control systems, including: Researching and developing new security technologies to protect control systems. Developing security policies, guidance, and standards for control system security. For example, the use of consensus standards could be considered to encourage industry to invest in stronger security for control systems.
Increasing security awareness and sharing information about implementing more secure architectures and existing security technologies. For example, a more secure architecture might be attained by segmenting control networks with robust firewalls and strong authentication. Also, organizations may benefit from educating management about the cybersecurity risks related to control systems and sharing successful practices related to working across organizational boundaries. Implementing effective security management programs that include consideration of control system security. We have previously reported on the security management practices of leading organizations. Such programs typically consider risk assessment, development of appropriate policies and procedures, employee awareness, and regular security monitoring. Developing and testing continuity plans within organizations and industries, to ensure safe and continued operation in the event of an interruption, such as a power outage or cyber attack on control systems. Elements of continuity planning typically include (1) assessing the criticality of operations and identifying supporting resources, (2) taking steps to prevent and minimize potential damage and interruption, (3) developing and documenting a comprehensive continuity plan, and (4) periodically testing the continuity plan and making appropriate adjustments. Such plans are particularly important for control systems, where personnel may have lost familiarity with how to operate systems and processes without the use of control systems. In addition, earlier this year we reviewed the federal government’s critical infrastructure protection efforts related to selected industry sectors, including electricity and oil and gas. We recommended that the federal government assess the need for grants, tax incentives, regulation, or other public policy tools to encourage increased critical infrastructure protection activities by the private sector and greater sharing of intelligence and incident information among these industry sectors and the federal government. In addition, we have made other recommendations related to critical infrastructure protection, including: developing a comprehensive and coordinated plan for national critical infrastructure protection; improving information sharing on threats and vulnerabilities between the private sector and the federal government, as well as within the government itself; and improving analysis and warning capabilities for both cyber and physical threats. Although improvements have been made, further efforts are needed to address these challenges in implementing critical infrastructure protection. Government and private industry have taken a broad look at the cybersecurity requirements of control systems and have initiated several efforts to address the technical, economic, and cultural challenges that must be addressed. These cybersecurity initiatives include efforts to promote research and development activities; develop process control security policies, guidance, and standards; and encourage security awareness and information sharing. 
For example, several of the Department of Energy's national laboratories have established or plan to establish test beds for control systems, the government and private sector are collaborating on efforts to develop industry standards, and Information Sharing and Analysis Centers such as the Chemical Sector Cybersecurity Program (for the chemical sector) and the North American Electric Reliability Council (for the electricity sector) have been developed to coordinate communication between industries and the federal government. Attachment I describes selected current and planned initiatives in greater detail. In summary, it is clear that the systems that monitor and control the sensitive processes and physical functions of the nation's infrastructures are at increasing risk to threats of cyber attacks. Securing these systems poses significant challenges. Both government and industry can help to address these challenges by lending support to ongoing initiatives as well as taking additional steps to overcome barriers that hinder better security. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. Should you have any further questions about this testimony, please contact me at (202) 512-3317 or at [email protected]. Individuals making key contributions to this testimony included Shannin Addison, Joanne Fiorino, Alison Jacobs, Elizabeth Johnston, Steven Law, David Noone, and Tracy Pierson. Research and development of new security technologies could provide additional security options to protect control systems. Several federally funded entities have ongoing efforts to research, develop, and test new technologies. At Sandia's SCADA Security Development Laboratory, industry can test and improve the security of its SCADA architectures, systems, and components. Sandia also has initiatives under way to advance technologies that strengthen control systems through the use of intrusion detection, encryption/authentication, secure protocols, system and component vulnerability analysis, secure architecture design and analysis, and intelligent self-healing infrastructure technology. Plans are under way to establish the National SCADA Test Bed, which is expected to become a full-scale infrastructure testing facility that will allow for large-scale testing of SCADA systems before actual exposure to production networks and for testing of new standards and protocols before rolling them out. Los Alamos and Sandia have established a critical infrastructure modeling, simulation, and analysis center known as the National Infrastructure Simulation and Analysis Center. The center provides modeling and simulation capabilities for the analysis of critical infrastructures, including the electricity, oil, and gas sectors. The National Science Foundation is considering pursuing cybersecurity research and development options related to the security of control systems. Several efforts to develop policies, guidance, and standards to assist in securing control systems are in progress. There are coordinated efforts between government and industry to identify threats, assess infrastructure vulnerabilities, and develop guidelines and standards for mitigating risks through protective measures. Actions that have been taken so far or are under way include the following. In February 2003, the President's Critical Infrastructure Protection Board released the National Strategy to Secure Cyberspace. The document provides a general strategic picture, specific recommendations and policies, and the rationale for these initiatives.
The strategy ranks control network security as a national priority and designates the Department of Homeland Security to be responsible for developing best practices and new technologies to increase control system security. The Instrumentation, Systems, and Automation Society is composed of users, vendors, government, and academic participants representing the electric utilities, water, chemical, petrochemical, oil and gas, food and beverage, and pharmaceutical industries. It has been working on a proposed standard since October 2002. The new standard addresses the security of manufacturing and control systems. It is to provide users with the tools necessary to integrate a comprehensive security process. Two technical reports are planned for release in October 2003. One report, ISA-TR99.00.01, Security Technologies for Manufacturing and Control Systems, will describe electronic security technologies and discuss specific types of applications within each category, the vulnerabilities addressed by each type, suggestions for deployment, and known strengths and weaknesses. The other report, ISA-TR99.00.02, Integrating Electronic Security into the Manufacturing and Control Systems Environment, will provide a framework for developing an electronic security program for manufacturing and control systems, as well as a recommended organization and structure for the security plan. Sponsored by the federal government's Technical Support Working Group, the Gas Technology Institute has researched a number of potential encryption methods to prevent hackers from accessing natural gas company control systems. This research has led to the development of an industry standard for encryption. The standard would incorporate encryption algorithms to be added to both new and existing control systems to control a wide variety of operations. This standard is outlined in the American Gas Association's report numbered 12-1. The National Institute of Standards and Technology and the National Security Agency have organized the Process Controls Security Requirements Forum to establish security specifications that can be used in procurement, development, and retrofit of industrial control systems. They have also developed a set of security standards and certification processes. The North American Electric Reliability Council has established a cybersecurity standard for the electricity industry. The council requires members of the electricity industry to self-certify that they are meeting the cybersecurity standards. However, as currently written, the standard does not apply to control systems. The Electric Power Research Institute has developed the Utility Communications Architecture, a set of standardized guidelines that provides interconnectivity and interoperability for utility data communication systems for real-time information exchange. Many efforts are under way to spread awareness about cyber threats and control system vulnerabilities and to take proactive measures to strengthen the security of control systems. The Federal Energy Regulatory Commission, the Department of Homeland Security, and other federal agencies and organizations are involved in these efforts. The Department of Homeland Security created a National Cyber Security Division to identify, analyze, and reduce cyber threats and vulnerabilities, disseminate threat warning information, coordinate incident response, and provide technical assistance in continuity of operations and recovery planning.
The Critical Infrastructure Assurance Office within the Department coordinates the federal government's initiatives on critical infrastructure assurance and promotes national outreach and awareness campaigns about critical infrastructure protection. Sandia National Laboratories has collaborated with the Environmental Protection Agency and industry groups to develop a risk assessment methodology for assessing the vulnerability of water systems in major U.S. cities. Sandia has also conducted vulnerability assessments of control systems within the electric power, oil and gas, transportation, and manufacturing industries. Sandia is involved with various activities to address the security of our critical infrastructures, including developing best practices, providing security training, demonstrating threat scenarios, and furthering standards efforts. Designated by the Department of Energy as the electricity sector's Information Sharing and Analysis Center coordinator for critical infrastructure protection, the North American Electric Reliability Council facilitates communication between the electricity sector, the federal government, and other critical infrastructure sectors. The council has formed the Critical Infrastructure Protection Advisory Group, which guides cybersecurity activities and conducts security workshops to raise awareness of cyber and physical security in the electricity sector. The council also formed a Process Controls subcommittee within the Critical Infrastructure Protection Advisory Group to specifically address control systems. The Federal Energy Regulatory Commission regulates interstate commerce in oil, natural gas, and electricity. The commission has published a rule to promote the capturing of critical energy infrastructure information, which may lead to increased information sharing between industry and the federal government. The Process Control Systems Cyber Security Forum is a joint effort between Kema Consulting and LogOn Consulting, Inc. The forum studies the cybersecurity issues surrounding the effective operation of control systems and focuses on issues, challenges, threats, vulnerabilities, best practices/lessons learned, solutions, and related topical areas for control systems. It currently holds workshops on control system cybersecurity. The Chemical Sector Cybersecurity Program is a forum of 13 trade associations and serves as the Information Sharing and Analysis Center for the chemical sector. The Chemical Industry Data Exchange is part of the Chemical Sector Cybersecurity Program and is working to establish a common security vulnerability assessment methodology and to align the chemical industry with the ongoing initiatives at the Instrumentation Systems and Automation Society, the National Institute of Standards and Technology, and the American Chemistry Council. The President's Critical Infrastructure Protection Board and the Department of Energy developed 21 Steps to Improve the Cyber Security of SCADA Networks. These steps provide guidance for improving implementation and establishing underlying management processes and policies to help organizations improve the security of their control networks. The Joint Program Office has performed vulnerability assessments on control systems, including the areas of awareness, integration, physical testing, analytic testing, and analysis.
Computerized control systems perform vital functions across many of our nation's critical infrastructures. For example, in natural gas distribution, they can monitor and control the pressure and flow of gas through pipelines; in the electric power industry, they can monitor and control the current and voltage of electricity through relays and circuit breakers; and in water treatment facilities, they can monitor and adjust water levels, pressure, and chemicals used for purification. In October 1997, the President's Commission on Critical Infrastructure Protection emphasized the increasing vulnerability of control systems to cyber attacks. The House Committee on Government Reform, Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census asked GAO to testify on potential cyber vulnerabilities. GAO's testimony focused on (1) significant cybersecurity risks associated with control systems; (2) potential and reported cyber attacks against these systems; (3) key challenges to securing control systems; and (4) steps that can be taken to strengthen the security of control systems, including current federal and private-sector initiatives. In addition to general cyber threats, which have been steadily increasing, several factors have contributed to the escalation of the risks of cyber attacks against control systems. These include the adoption of standardized technologies with known vulnerabilities, the increased connectivity of control systems to other systems, constraints on the use of existing security technologies for control systems, and the wealth of information about them that is publicly available. Control systems can be vulnerable to a variety of attacks, examples of which have already occurred. Successful attacks on control systems could have devastating consequences, such as endangering public health and safety; damaging the environment; or causing a loss of production, generation, or distribution of public utilities. Securing control systems poses significant challenges, including technical limitations, perceived lack of economic justification, and conflicting organizational priorities. However, several steps can be taken now and in the future to promote better security in control systems, such as implementing effective security management programs and researching and developing new technologies. The government and private industry have initiated several efforts intended to improve the security of control systems.
Under the Railroad Retirement Act of 1974, the Railroad Retirement Board operates two distinct disability programs—the occupational disability program and the total and permanent disability program. The occupational disability program provides benefits for railroad workers when they are unable to perform the duties required of them by their railroad employment. The program—which uses labor- and management-negotiated disability criteria that apply only to a worker's ability to perform his or her specific railroad occupation—provides benefits for workers who have physical or mental impairments that prevent them from performing their specific job, regardless of whether they can perform other work. For example, a railroad engineer who cannot frequently climb, bend, and reach, as required by the job, may be found occupationally disabled. Workers determined to be eligible for benefits under the occupational disability program may ultimately be able to return to the workforce, but generally may not return to their original occupation. According to RRB, at the end of fiscal year 2013, the agency was paying about 60,500 occupational disability annuities, down from about 61,700 in fiscal year 2012. In fiscal year 2014, the agency approved about 97 percent of the 1,250 applications for occupational disability benefits it received. The eligibility criteria for the total and permanent disability program differ from those for the occupational disability program. Under the total and permanent disability program, RRB makes independent determinations of railroad workers' claimed disability using the same general criteria that the Social Security Administration (SSA) uses to administer its Disability Insurance (DI) program. For example, a worker must have a medically determinable physical or mental impairment that: (1) has lasted (or is expected to last) at least 1 year or is expected to result in death, and (2) prevents them from engaging in substantial gainful activity, defined as work activity that involves significant physical or mental activities performed for pay or profit. In other words, these workers are essentially deemed unable to perform any gainful work and are generally unable to engage in any regular employment. SSA staff review about one-third of the cases that RRB has determined to be eligible for total and permanent disability benefits for which Social Security benefits may potentially be paid. According to RRB, at the end of fiscal year 2013, the agency was paying about 20,700 total disability annuities. In fiscal year 2014, RRB approved about 78 percent of the nearly 800 applications for total and permanent disability benefits it received. While the railroad retirement system has remained separate from the Social Security system, the two systems are closely linked with regard to earnings, benefit payments, and taxes. A financial interchange links the financing of the two systems, providing a transfer of funds between RRB and SSA accounts based on the amount of Social Security benefits that workers would have received if they were covered by Social Security, as well as the payroll taxes that would have been collected if the railroad workers were covered by Social Security instead of their own system. When such benefits would exceed payroll taxes, the difference—including interest and administrative expenses—is transferred from Social Security to RRB. When such payroll taxes would exceed benefits, the transfer goes in the other direction.
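A minimal sketch of the comparison that drives the financial interchange is shown below. The dollar amounts are entirely hypothetical, and the sketch ignores the interest and administrative-expense adjustments noted above; it only illustrates how the direction of the transfer follows from whether the hypothetical Social Security benefits would exceed the hypothetical Social Security payroll taxes.

```python
# Minimal sketch of the financial interchange comparison described above. Dollar
# amounts are hypothetical, and interest and administrative expenses are omitted;
# the point is only the direction of the transfer.

def interchange_transfer(ss_equivalent_benefits: float, ss_equivalent_taxes: float) -> str:
    """Compare what Social Security would have paid with what it would have collected."""
    difference = ss_equivalent_benefits - ss_equivalent_taxes
    if difference > 0:
        return f"Transfer ${difference:,.0f} from Social Security to RRB"
    if difference < 0:
        return f"Transfer ${-difference:,.0f} from RRB to Social Security"
    return "No transfer"

# Hypothetical year in which benefits that would have been paid exceed taxes collected.
print(interchange_transfer(ss_equivalent_benefits=4_500_000_000,
                           ss_equivalent_taxes=3_200_000_000))
```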
Since 1959, such transfers have favored RRB, and for all RRB benefits paid in fiscal year 2012, RRB received about 38 percent of the financing for benefits paid through the financial interchange. In 2009 and 2010, we reviewed the claims process for RRB's occupational disability program and found no overall evidence at similar commuter railroads of unusual claims like those exhibited at the Long Island Railroad; however, we did identify several potential program vulnerabilities including a reliance on a manual, paper-based claims process and the lack of a systematic way to evaluate potentially fraudulent claims. Our work found that RRB had not analyzed occupational disability data or performed other analyses that could have enabled the agency to identify unusual patterns in disability applications. Claims for disability through RRB are generally filed on paper and processed in paper form, which prevents the agency from detecting potential patterns of fraud or abuse that a computer-based system could reveal. When a railroad worker files a claim and submits information—such as details about his or her disability and work history—RRB staff create a paper claims file. These files are reviewed by claims examiners who apply eligibility criteria to determine if a benefit should be awarded. Claims are assigned to examiners randomly, and due to the manual nature of the claims process, it is difficult for individual examiners and the agency to detect potential patterns of fraud or abuse such as a high concentration of claims from one source, or boilerplate medical exam information from a small number of doctors or hospitals. Such analyses are central to ensuring the integrity of the program and—more importantly—ensuring that only eligible railroad workers receive benefits. Indeed, as was the case in the Long Island Railroad incident, the use of paper files likely played a key role in allowing these patterns to go undetected. In 2009, we analyzed data from multiple RRB data systems to determine the number of occupational disability benefit awards made, relative to employment, for the Long Island Railroad compared with the other commuter railroads and determined application and approval rates for occupational disability benefits for workers at these railroads to determine if other railroads exhibited high numbers of claims like those found at the Long Island Railroad. It is important to note that the data we used for our analyses were readily available to RRB, and the agency could have used these data to identify such patterns as part of its routine monitoring and oversight of the occupational disability program. While we found no overall evidence of unusual claims like those exhibited at the Long Island Railroad, neither we nor RRB could perform analyses to detect unusual patterns in commuter rail workers' applications, approval rates, and impairments by railroad occupation because the information is paper-based. Further, RRB does not maintain electronic data for all railroads on claimants' doctors in a format that would facilitate analysis and allow the agency to analyze and detect potentially fraudulent claims. Currently, RRB only has information on claimants' doctors in their paper claim files. RRB has taken some steps to increase the use of data to detect and analyze claim patterns, but much more work needs to be done.
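The kind of pattern analysis described above can become straightforward once claims data are held electronically rather than on paper. The sketch below is illustrative only: the field names, sample records, and flagging threshold are invented, and it simply shows how claims could be grouped by railroad and physician to flag unusual concentrations for follow-up review.

```python
# Illustrative sketch of the pattern analysis discussed above: once claims data are
# electronic, concentrations of claims tied to a single source (for example, one
# physician) are easy to flag. Field names, records, and the threshold are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id":  [1, 2, 3, 4, 5, 6, 7, 8],
    "railroad":  ["A", "A", "A", "A", "B", "B", "C", "C"],
    "physician": ["Dr. X", "Dr. X", "Dr. X", "Dr. Y", "Dr. Z", "Dr. Z", "Dr. Q", "Dr. R"],
})

# Share of each railroad's claims attributed to each physician.
counts = claims.groupby(["railroad", "physician"]).size().rename("n_claims").reset_index()
totals = claims.groupby("railroad").size().rename("railroad_total").reset_index()
shares = counts.merge(totals, on="railroad")
shares["share"] = shares["n_claims"] / shares["railroad_total"]

# Flag unusually concentrated sources for follow-up review (threshold is illustrative).
print(shares[shares["share"] >= 0.5])
```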
Since the Long Island Railroad incident, RRB created a new staff position responsible for collecting, developing, and analyzing relevant data to help manage and oversee the occupational disability program. However, this office's limited reviews have thus far focused on RRB's occupational disability program, and RRB officials told us during our 2014 review that there were no current plans to include and evaluate data from the total and permanent disability program in its analyses. Our recent work examining the processes and controls associated with the total and permanent disability program indicated that it too was vulnerable to fraud and improper payments. For example, we found fundamental shortcomings in this program's policies and procedures with respect to the disability determination process, internal controls, performance and accountability, and fraud awareness. Outdated earnings information: Our 2014 review found that RRB awarded total and permanent disability claims based on outdated work and earnings information. In order to qualify for total and permanent disability benefits, a worker must meet certain work and earnings eligibility criteria. For example, a worker generally cannot earn income in excess of $850 per month from employment or net self-employment. RRB requires that claimants report any income and employment information at the time a disability claim is submitted, and RRB attempts to confirm this information by comparing it to data within the SSA Master Earnings File. However, this earnings database may not provide up-to-date information on work and earnings because the most recent data contained within the database are for the last complete calendar year before the claim was filed. As a result, the data that RRB uses to determine eligibility may lag behind actual earnings by up to 12 months. Without reviewing the most up-to-date information available, RRB is unable to ensure that only eligible workers receive benefits. Other sources could potentially provide RRB with more current and timely information on claimants' work and earnings, such as the National Directory of New Hires (NDNH) and The Work Number. The NDNH was established in part to help states enforce child support orders against noncustodial parents. However, access to the NDNH is limited by statute, and RRB does not have specific legal authority to access it. The Work Number is a privately maintained data source designed to help users identify unreported income. The Work Number allows organizations such as social service agencies to locate an individual's current place of employment or uncover unreported income, based on the most recent payroll data from over 2,500 employers nationwide. Inquiries can be made about specific individuals or through automated data matches. The Work Number is used by several other federal agencies on a fee basis and is already available to RRB. In 2014, we recommended that RRB explore options to obtain more timely earnings data to ensure that claimants are working within allowable program limits prior to being awarded benefits.
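To make the timeliness problem concrete, the minimal sketch below checks reported monthly earnings against the program limit and reports how stale the verification data are relative to the claim date. Only the $850 monthly limit comes from the discussion above; the record layout and the check itself are illustrative assumptions, not RRB's procedure.

```python
# Illustrative sketch only; record layout and logic are assumed, not RRB's actual process.
from datetime import date

MONTHLY_EARNINGS_LIMIT = 850  # general monthly limit for total and permanent disability, per the text

def earnings_check(claim_date, earnings_records):
    """Return (appears_within_limit, months_of_lag) for the most recent earnings record.

    earnings_records: list of (period_end_date, monthly_earnings) tuples, ideally drawn
    from a timelier source than prior-calendar-year data.
    """
    latest_period, latest_earnings = max(earnings_records, key=lambda record: record[0])
    lag_months = (claim_date.year - latest_period.year) * 12 + (claim_date.month - latest_period.month)
    return latest_earnings <= MONTHLY_EARNINGS_LIMIT, lag_months


# Hypothetical example: a claim filed in November 2014 verified against data ending December 2013.
ok, lag = earnings_check(date(2014, 11, 1), [(date(2013, 12, 31), 700)])
print(ok, lag)  # True 11 -- earnings appear allowable, but the verification data lag by 11 months
```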
RRB officials agreed with our recommendation and have told us that they will work with the Office of Management and Budget to further define and determine RRB's needs in this area. Insufficient supervisory review process: Our examination of RRB's total and permanent disability claims review process uncovered gaps in internal controls, such as allowing a single claims examiner to review claims and award disability benefits—in many cases without an independent review by a second party. GAO's Standards for Internal Control in the Federal Government states that agencies should ensure that key duties and responsibilities are divided or segregated among different people to reduce the risk of error, waste, or fraud. However, we found an inconsistent review process at RRB. Specifically, at the time of our review, RRB's policies and procedures allowed for discretion at the field office level regarding how complete a case file must be before forwarding it to headquarters for a determination, and these files were subject to different levels of review. For example, at the headquarters examination and determination level, RRB policy allowed for some claims to be approved without any subsequent independent review and generally allowed examiners to use their judgment to decide which cases did not require additional scrutiny. In other words, a single RRB claims examiner could, at his or her discretion, "self-authorize" the claim. In recent years, about one-quarter to one-third of all total and permanent initial claims were approved by the same claims examiner who reviewed the application. Without a second review, problems such as an error in judgment on the part of the claims examiner or a failure to obtain key medical and vocational evidence can go undetected. As a result of our review, we recommended that RRB revise its policy to require supervisory review and approval of all total and permanent disability cases. In response, RRB has subsequently changed its policy, and officials stated that nearly all claim files are now reviewed by a second party. Program quality and integrity: Our 2014 review also found an insufficient commitment to quality and program integrity. We found that RRB's primary focus on quality was to ensure that claims were paid quickly and that the approved benefit amount was paid. However, RRB did not have sufficient controls to ensure, before the benefit was paid, that the claimant was actually eligible for benefits or that the benefit was awarded correctly. In certain circumstances, RRB was able to identify improper payments after the benefit had already been paid, but this put RRB into a "pay and chase" mode in which it must try to recover benefits paid to ineligible claimants. We agree with RRB that claims should be paid as quickly as possible; however, it is equally important to ensure that the benefits are properly awarded. To ensure the integrity of the program, it is also critical that RRB report the results of its quality assurance efforts to Congress and other interested parties. RRB's performance monitoring standards have been focused primarily on payment timeliness and accuracy and less on whether claimants were properly qualified to receive benefits. Information on approval rates and the accuracy of disability determinations is critical to ensuring the accountability of the agency's work.
As a result, we recommended that RRB strengthen oversight of its disability determination process by establishing a regular quality assurance review of initial disability determinations to assess the quality of medical evidence, determination accuracy, and process areas in need of improvement, and that RRB develop performance goals to track the accuracy of disability determinations. RRB agreed with these recommendations, plans to develop new measures of quality and program integrity, and will include the development of performance goals as part of its new quality assurance plan; however, we have yet to receive or review this plan. Fraud detection and awareness: Lastly, our review found inadequate internal controls to identify and eliminate fraud at every stage of the process and an insufficient commitment to fraud awareness throughout the agency. RRB had not engaged in a comprehensive effort to continuously identify and prevent potential fraud program-wide even after the high-profile Long Island Railroad incident exposed fraud as a key program risk. Since that incident, RRB increased its scrutiny of claims from Long Island Railroad workers—for example, by ordering more consultative medical exams. However, as noted earlier, its other actions to detect and prevent fraud have been limited and narrowly focused. For example, in 2011, RRB conducted an analysis of 89 cases of proven fraud in its occupational disability and total and permanent disability programs to identify common characteristics that could aid in identifying at-risk cases earlier in the process. However, RRB did not draw any conclusions about new ways to identify potential fraud and, as a result, did not make any system-wide changes to the determination process. Our interviews with RRB staff also showed an inconsistent level of awareness about fraud, and claims representatives in all four of the district offices that we contacted said they had not received any training directly related to fraud awareness. While RRB had initiated fraud awareness training, agency participation was incomplete and updates and refreshers were sporadic. Due to this limited focus on fraud detection and awareness, we recommended that RRB (1) develop procedures to identify and address cases of potential fraud before claims are approved, (2) require annual training on these procedures for all agency personnel, and (3) regularly communicate management's commitment to these procedures and to the principle that fraud awareness, identification, and prevention are the responsibility of all RRB staff. RRB agreed with this recommendation and has begun taking steps to increase fraud awareness, amend its policies and procedures with new fraud detection and reporting mechanisms, and provide fraud awareness training to its staff. RRB officials also stated that the agency has hired a contractor to review the agency's fraud awareness and detection systems to identify specific areas in need of improvement. In summary, our recent work has found that RRB's disability programs lack sufficient policies and procedures to address the vulnerabilities they face; as a result, the agency remains vulnerable to fraud and runs the risk of making improper payments. The weaknesses we have identified in RRB's determination process require sustained management attention and a more proactive stance by the agency.
Without a commitment to fundamental aspects of internal control and program integrity, RRB remains vulnerable to fraud and runs the risk of making payments to ineligible individuals, thereby undermining the public’s confidence in these important disability programs. While the Board agreed with all of our recommendations and the agency has taken steps to address them, more work remains to be done. We look forward to working with members of the subcommittee, RRB officials, and Inspector General staff as RRB continues to implement our recommendations. Chairman Meadows, Ranking Member Connolly, and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Daniel Bertoni, Director, Education, Workforce, and Income Security issues at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff members who made key contributions to this testimony are David Lehrer (Assistant Director), Jessica Botsford, Alex Galuten, Jamila Kennedy, Jean McSween, Arthur Merriam, and Kate van Gelder. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1, 1999). Railroad Retirement Board: Total and Permanent Disability Program at Risk of Improper Payments, GAO-14-418. Washington, D.C.: June 26, 2014. Use of the Railroad Retirement Board Occupational Disability Program across the Rail Industry, GAO-10-351R. Washington, D.C.: February 4, 2010. Railroad Retirement Board: Review of Commuter Railroad Occupational Disability Claims Reveals Potential Program Vulnerabilities, GAO-09-821R. Washington, D.C.: September 9, 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over time, GAO, the RRB Inspector General, and the U.S. Department of Justice have reviewed or investigated RRB's disability benefit programs and found them to be vulnerable to fraud and abuse, which places the agency at risk of making improper payments. Beginning in 2008, the Department of Justice investigated and prosecuted railroad workers who were suspected of falsely claiming RRB benefits. As of September 30, 2014, these investigations and prosecutions had resulted in approximately $614 million in restitution, forfeiture, and fines, raising concerns about RRB's administration of its disability claims process. Implementing strong preventive controls can serve as a frontline defense against improper payments. Examples of preventive controls include (1) ensuring that key duties and responsibilities are divided or segregated among different people to reduce the risk of error, waste, or fraud and (2) using timely earnings information to ensure claimants are eligible to receive program benefits. GAO did not make recommendations regarding the occupational disability program but, in 2014, made five recommendations regarding the total and permanent disability program. This testimony provides information on (1) the critical program vulnerabilities of RRB's occupational disability program and (2) the potential for fraud and threat of improper payments in RRB's total and permanent disability program. GAO is not making any new recommendations in this testimony. The Railroad Retirement Board (RRB) administers two disability programs—the occupational disability program and the total and permanent disability program. The occupational disability program provides benefits to railroad workers in situations where workers are unable to perform their railroad work but may be able to return to the workforce in another occupation. The total and permanent disability program provides benefits to workers who have a medically determinable physical or mental impairment severe enough that they are generally unable to engage in any regular employment. As a steward of taxpayer dollars, RRB is responsible for how it disburses billions of dollars each year. In recent years, RRB has been the subject of Government Accountability Office (GAO) audits that have highlighted shortcomings in RRB's administration of its disability programs. RRB Inspector General audits and a U.S. Department of Justice investigation have found similar challenges. GAO found that RRB's continued reliance on a paper-based process and the agency's lack of a robust analytical framework to target potential fraud and abuse in the occupational disability program left the agency susceptible to making improper payments to individuals who did not qualify for benefits. For example, individual occupational disability claims were kept in paper-based files, making it difficult for claims examiners to identify unusual patterns or instances where medical information may originate from a small number of doctors or hospitals. Similarly, RRB did not maintain information on doctors in a format that would allow the agency to detect and analyze potential instances of fraud. RRB had begun separately collecting data to detect unusual patterns in relation to a high-profile fraud incident involving employees of the Long Island Railroad, but had not expanded these analyses to other railroads or to other programs outside the occupational disability program.
GAO also found in its 2014 review that RRB's total and permanent disability program was vulnerable to fraud and improper payments. A shortage of timely data, gaps in internal controls, a lack of a comprehensive system of quality assurance and performance monitoring, and insufficient focus on potential fraud all contributed to a need for fundamental program reform. For example, GAO found that RRB was verifying claimants' self-reported work and earnings histories using information that was up to 1 year old when newer data were available. Further, RRB's claims review process did not follow accepted internal controls by sufficiently separating claim reviews from approvals and, as a result, from one-quarter to one-third of total and permanent disability cases were approved without independent review by a second party. In addition, RRB's performance monitoring standards were focused primarily on payment timeliness and accuracy and less on whether claimants were properly qualified to receive benefits. Lastly, RRB's process lacked a fundamental awareness of and sensitivity to instances of potential fraud. In its 2014 report examining the total and permanent disability program, GAO made several recommendations to improve the oversight of this program, including ways to improve information, increase internal controls, and foster fraud awareness. RRB officials agreed with all of GAO's recommendations, and the agency has begun taking steps to implement them.
As the federal government's principal real estate agent, the General Services Administration (GSA) controls the largest office space portfolio in the United States. More than 1 million federal employees work in 276 million square feet of space that GSA controls in about 7,800 buildings nationwide. GSA has a virtual monopoly over the federal government's acquisition and management of general purpose office space that is owned or leased to support federal agencies' missions. Our earlier work has shown that federal agencies have long been generally dissatisfied with GSA's monopoly, as well as with the quality, condition, and costs of their office space and the amount of time GSA takes to deliver it. Our key reports and testimonies over the past 5 years on GSA's monopoly and various public buildings issues are identified at the end of this report. Once federal agencies report their office space requirements to GSA, it decides whether those requirements will be met through government owned or leased space. Of the 276 million square feet of space nationwide that GSA controls, almost one-half—133 million square feet in over 6,000 buildings—is leased. The rest—143 million square feet—is in about 1,700 federally owned buildings. In recent years, GSA has become increasingly dependent on leased office space. Between 1975 and 1994, the amount of space GSA leased increased by 37 percent, and the leased share of GSA-controlled space rose from 40 percent to 48 percent. In fiscal year 1994, GSA expected to pay $2.1 billion for leased space, and these costs represented almost 30 percent of its total estimated $7.3 billion public buildings budget. GSA projects that the costs of leased space will rise to $3 billion annually by 2002 unless the ratio of federally owned to leased space is increased. GSA's costs of providing office space and related mission-support services to federal agencies, in federally owned as well as leased buildings, are financed by the Federal Buildings Fund (FBF). GSA charges federal agencies rent for the space they occupy, which is supposed to be comparable to local commercial rents; deposits these rent receipts in the FBF; and uses them, subject to congressional limitations in annual appropriation acts, to pay building capital and operating expenses, including the costs of leased space. GSA's lease acquisition process involves five major phases: (1) refining agencies' identified office space size, configuration, and location requirements; (2) preparing a solicitation for offers detailing the government's space requirements, describing the award criteria to be used, and soliciting offers from prospective landlords; (3) analyzing landlords' offers in accordance with the specified award criteria and selecting the winning landlord; (4) preparing, reviewing, and approving the formal lease agreement; and (5) preparing space layouts and architectural plans and customizing the space to meet the federal tenant agency's specific needs. In leasing office space, GSA is to follow procedures prescribed in the General Services Acquisition Regulation (GSAR). GSA's procedures apply many of the procurement principles in the Federal Acquisition Regulation (FAR), the primary federal procurement regulation governing the acquisition of supplies and services, to its leasing process. GSA also incorporated into GSAR requirements contained in the Competition in Contracting Act of 1984 (CICA) that seek to achieve full and open competition for federal contracts.
In addition, GSA's leasing process is used to further national policies and to enforce various federal socioeconomic mandates. For example, GSA's lease award criteria incorporate Executive Order 12072, which promotes the economic development of the central business districts of cities, and various Equal Employment Opportunity requirements. Finally, GSA's leasing procedures incorporate the principles of various other procurement laws, executive orders, decisions of the federal courts and the Boards of Contract Appeals, and the regulations of various agencies, such as the Environmental Protection Agency or the Architectural and Transportation Barriers Compliance Board. Congress created GSA in 1949 to centralize, in a single agency, responsibilities for the housekeeping functions of the executive branch—procurement, management of real and personal property, records management, etc. The Federal Property and Administrative Services Act of 1949 gave the Administrator of GSA broad authority over the management of real property, including authority to (1) prescribe regulations governing real property management and leasing, (2) lease real property, and (3) delegate lease authority back to the head of any federal agency. As emphasized in our December 1992 Transition Report on General Services Issues (GAO/OCG-93-28TR), GSA, since its establishment in 1949, has been torn between (1) an internal dynamic that emphasizes a centralized approach to the direct provision and operation of office space and other support services to federal client agencies and (2) a largely external expectation that its primary role should be to set governmentwide policy, provide effective and comprehensive oversight of decentralized operations within the departments and agencies, and directly operate activities only where it makes sense and is cost effective to have a central agency involved. The latter view is generally supported by the agencies, the Office of Management and Budget (OMB), the Vice President's 1993 National Performance Review (NPR), and us. Over the years, a shift away from direct delivery of services has resulted in a sharp reduction in GSA's employment levels. GSA's Public Buildings Service decreased from over 18,000 employees in 1978 to about 9,000 employees in 1994. Historically, GSA generally has been unwilling to delegate to other agencies its authority to lease general purpose office space within urban areas and has opposed agencies' efforts to obtain independent public buildings authority. However, GSA has delegated day-to-day buildings management and lease administration responsibilities to federal agencies for about 2,000 of its 7,800 buildings. These agencies are now handling (or contracting for) functions previously handled by GSA. However, GSA is responsible for providing governmentwide guidance and overseeing these functions. Over the years, we have generally supported decentralized real property operations and GSA's delegations of authority to tenant agencies, and we have taken the position that GSA should make greater use of delegated authority. To date, GSA has delegated lease acquisition authority to some federal agencies. For the most part, however, these delegations are for special-purpose space, such as military recruiting offices, medical clinics or treatment centers, and storage facilities, or for general purpose office space in locations outside major urban areas or areas where GSA controls less than 250,000 square feet of space.
Several federal agencies, boards, and commissions have independent statutory leasing authority. Most of the agencies having such authority are self-supporting, and their activities are not financed by congressional appropriations. In many cases, this statutory leasing authority is only for specific geographic areas or special-purpose space. However, some agencies, such as the Securities and Exchange Commission (SEC), have broad statutory leasing authority. SEC received its statutory authority in 1990. At the request of the Chairman, Senate Committee on Governmental Affairs, we reported in November 1992 on SEC's independent statutory leasing authority. Given the small number of SEC leases and the difficulty of finding comparable GSA leases, we were unable to determine conclusively whether SEC's lease rates were higher or lower than GSA's. Citing primarily GSA's long-standing monopoly and its historical focus on day-to-day real property operations at the expense of needed governmentwide leadership and oversight, NPR concluded, as we did in our December 1992 Transition Report on General Services Issues, that GSA's long-standing methods of doing business should be replaced with new methods that are based on entrepreneurial and competitive principles. NPR recommended (1) ending GSA's office space monopoly; (2) allowing federal agencies the choice of obtaining office space and related mission-support services from GSA, other federal entities, or the private sector; and (3) changing the way GSA does business. Concerning office space leasing, NPR recommended simplifying the procedures for acquiring leased office space of less than 10,000 square feet and renewing existing leases. NPR's September 7, 1993, report also concluded that the overall federal procurement process had become "too complex, absurdly slow, and frequently ineffective" and that "elaborate safeguards often cost more money than they save." According to NPR, federal procurement needs to be reshaped by decentralizing authority to line managers and letting them buy much of what they need, simplifying procurement regulations and processes, and empowering the system's customers by ending most government service monopolies, including those of GSA. Besides recommending the revision of federal procurement regulations, NPR made several other recommendations aimed at reforming federal procurement policies, procedures, and practices. After we completed our work and prepared a draft of this report, Congress enacted procurement reform legislation. The Federal Acquisition Streamlining Act of 1994 (P.L. 103-355), enacted on October 13, 1994, seeks to enhance the federal acquisition process through certain streamlining improvements and a wide-ranging set of performance-based management goals and incentives. The act's leasing provisions are highlighted at the end of chapter 4. Expressing concern about escalating federal lease costs and the continued efficacy of GSA's leasing process, the former Chairman of the Subcommittee on Water Resources, Transportation, Public Buildings and Economic Development, Senate Committee on Environment and Public Works, asked us to examine the efficiency and effectiveness of GSA's policies, procedures, and practices for leasing office space and how they compare with those of private industry. To respond to these concerns, we identified and examined GSA's leasing policies, procedures, and practices.
We discussed their efficiency and effectiveness with responsible GSA headquarters and regional management officials and realty specialists and their legal basis with representatives of GSA’s Office of General Counsel. Similarly, we reviewed and discussed with these GSA officials the federal laws, procurement regulations, and other national policies that guide GSA’s leasing activities. These included FAR, GSAR, CICA, and various other laws, policy directives, and legal decisions. We also reviewed (1) earlier GAO, GSA Inspector General, and GSA internal reviews and studies of GSA’s leasing process, policies, and practices; (2) available GSA data and statistics on its overall leasing performance and its delegations of lease acquisition authority to federal customer agencies; (3) the results of recent surveys of federal customer agencies’ satisfaction with GSA’s services; and (4) the findings, conclusions, and recommendations of NPR dealing with the federal procurement process and GSA’s leasing and other real property activities. We documented and flowcharted GSA’s leasing process; examined GSA’s rationale for each of its major leasing steps and requirements; and made comparative evaluations of GSA and private industry leasing policies, procedures, and performance. We judgmentally selected and reviewed a sample of 34 leases GSA awarded between 1988 and 1992—13 leases in San Francisco, CA; 12 leases in New York, NY; and 9 leases in Dallas, TX. These 34 leases, identified in appendix III, represented all the leases GSA awarded in the central business districts of these 3 federal regional cities during this period and included leases of varying sizes ranging from 540 square feet to 463,399 square feet. Our review of these leases focused on GSA’s timeliness in meeting federal agencies’ office space needs, the nature and degree of competition for GSA’s leases, how GSA’s procedures affected the level of competition, and how GSA’s lease rates compared to similar private sector leases in the same building or geographic area that GSA had identified for comparative purposes. Because of the relatively small number of leases reviewed and variations in the different real estate markets involved, the results of our sample analyses are not projectable to GSA’s nationwide leasing activities. As part of our review of these 34 GSA leases, we attempted to contact the 167 commercial landlords or real estate brokers that GSA had solicited for offers on these leases. We successfully contacted 82 of these landlords or brokers and obtained their views on GSA’s leasing process, practices, and performance; how they compare with private industry leasing practices; and how they could be improved. In 2 of the 3 federal regional cities where we did our fieldwork—San Francisco and Dallas/Fort Worth—we judgmentally selected and interviewed the real estate managers of 12 major private sector firms with large portfolios of leased office space to discuss their leasing approach and identify their leasing procedures and practices. At their request, we agreed not to identify the 12 firms by name. In selecting these firms, we used regional business directories to identify the largest firms, in terms of sales and number of employees, that were headquartered in or had offices in these two cities. 
Our selection criteria were that the firm (1) had at least 500,000 square feet of leased commercial office space; (2) had leased space in more than one geographic region of the United States; and (3) was willing to discuss its leasing approach with us and provide us information on its leasing practices. The 12 firms we selected are major players in the commercial real estate leasing market; 5 of them were on the 1993 Fortune 500 list, and the other 7 are recognized leaders in their respective industries. Also, 7 of the 12 firms we selected had leased space portfolios exceeding 1 million square feet. Using information obtained from the realty managers of these 12 firms, we compared their leasing procedures and practices with GSA's in terms of how they (1) identify potential space; (2) establish award criteria; (3) determine whether to use commercial real estate agents or in-house real estate staff; (4) negotiate lease clauses, lease rates, and the costs of customizing office space; and (5) evaluate whether the lease rate is fair and reasonable. We discussed with responsible GSA program and legal officials (1) the results of our comparative analyses of GSA and private industry leasing practices; (2) the legal basis for and necessity of the GSA lease clauses and requirements that private landlords or real estate brokers/agents found most burdensome, cumbersome, or objectionable; and (3) any statutory provisions that would prevent GSA from adopting more expeditious, cost-effective, and businesslike leasing practices. Also, we identified, considered, and discussed with GSA leasing officials several actions the agency has taken in the last 3 years to improve its leasing process and various pilot projects and other changes it is exploring or considering, in response to NPR, to "reinvent" or "reengineer" its leasing policies, procedures, and practices. We did our work between October 1992 and April 1994 at GSA's central office in Washington, D.C., and its regional offices in San Francisco, CA; New York, NY; and Fort Worth, TX, in accordance with generally accepted government auditing standards. We discussed the results of our work with the Administrator and Deputy Administrator of GSA as well as other responsible GSA officials and considered their views in preparing this report. Also, we obtained GSA's written comments on a draft of this report. GSA's written comments are discussed at the end of chapter 5 and reproduced in appendix I.

- An extraordinary example of bureaucratic red tape.
- Too complex, absurdly slow, and frequently ineffective.
- Relies on rigid rules and procedures; extensive paperwork; detailed design specifications; and multiple levels of review, inspections, and audits.
- Not achieving what its customers want.
- Ignores its customers' needs, pays higher prices than necessary, is filled with peripheral objectives, and assumes that line managers cannot be trusted.
- Its complexity forces businesses to alter standard procedures and raise prices when dealing with the government.
- So process oriented as to minimize discretion and stifle innovation.

These statements were made by the National Performance Review (NPR) about the overall federal procurement process. These same statements also characterize the General Services Administration's (GSA) leasing process. Historically, GSA's leasing policies, procedures, and practices and the laws and federal regulations that guide them have been focused on process rather than on results.
Over the years, procedural control after procedural control was added to GSA’s leasing process in response to GAO and Inspector General audits, congressional concerns, and the laudable goals of ensuring compliance with overall federal procurement rules and regulations, safeguarding the government’s interests, and minimizing fraud, abuse, and the number of bid protests by unsuccessful offerors. Such procedural controls are important and useful provided they are balanced with efficiency and effectiveness and do not cause organizations to lose sight of their basic missions. In the leasing area, however, the cumulative result of these well-intended procedural controls is a leasing process that has become rule-focused and inflexible, complex and cumbersome, and time consuming and costly. Our work showed that GSA’s process-oriented approach does not work very well in the dynamic commercial marketplace. It does not enable GSA to respond quickly enough in today’s competitive real estate environment and impedes its ability to get the best available leasing values. We identified several characteristics of GSA’s leasing process that seem to put GSA at a distinct disadvantage in the commercial marketplace, cause it to pay more than necessary for leased space, impede timely space delivery, and discourage competition for government leases. As discussed in chapter 4, GSA recognizes that its leasing process takes too long, is too costly and inefficient, and inhibits its ability to compete effectively for good leasing values in today’s dynamic commercial real estate market. To help overcome these disadvantages, GSA has initiated several actions aimed at streamlining its leasing process, reducing procedural controls that are within its administrative authority, and improving its leasing performance. In response to NPR, GSA is exploring other changes to reengineer its leasing policies, procedures, and practices. Office space is a unique commodity. Each building has a different combination of attributes and amenities, and commercial lease rates are influenced by various factors, such as the overall business and real estate conditions, location and quality of the building, length and size of the lease, and cost of customizing space to meet tenants’ needs. Also, other factors, such as superior window views, influence lease rates. Since leased space is continually coming on and going off the market, getting a good real estate leasing deal depends heavily on being postured to seize available market opportunities. However, the process-oriented nature of GSA’s leasing approach makes it difficult for GSA to move quickly. GSA’s approach is at odds with the dynamic commercial real estate market that rewards—with low lease rates and good leasing values—those who move quickly, are aggressive and innovative, seize available opportunities, and negotiate the best deals. It impedes GSA’s ability to get good, timely leasing values in the highly competitive commercial marketplace. As mentioned earlier, GSA’s leasing policies, procedures, and practices historically have focused on process rather than results. According to responsible GSA officials, this focus occurred because GSA employees were concerned that any noncompliance with established procurement rules and regulations, fraud or abuse, bid protests from unsuccessful offerors, or other public criticism implied weaknesses in management controls or poor agency performance. 
In response to GAO and Inspector General audits; congressional or media criticisms over the years; and the laudable goals of minimizing adverse audit findings, fraud and abuse, and the number of bid protests, additional procedural safeguards and controls were added to more fully protect the government's interests. To help ensure compliance with established federal procurement rules and regulations and avoid bid protests, GSA's leasing process emphasizes full and open competition for federal leases and fair and equal treatment of all potential bidders. In accordance with the Competition in Contracting Act (CICA), GSA's policy is to explicitly lay out federal space requirements when soliciting bids and choose among competing offers strictly on the basis of specific, established award criteria. Because firms can protest procurement decisions if they feel they have been treated unfairly, GSA devotes much of its time and effort to ensuring that they are treated fairly. During the 5-year period covered by our review—1988 through 1992—GSA had an elaborate, time-consuming process for leasing office space and obtaining internal GSA and external reviews and approvals of proposed lease solicitations and agreements. As illustrated in appendix II—an 11-page flowchart—GSA's leasing process included hundreds of steps and involved dozens of independent reviews and checks. GSA realty specialists said that they have limited discretion to diverge from the prescribed process. According to them, their role is to understand the process, work within it, and ensure compliance. Typically, all steps must be completed before a lease contract can be signed. Depending on the value of the lease, GSA realty specialists had to obtain information or approval from as many as 14 different offices. In San Francisco, for example, leases costing more than $1 million required an appraisal of the value of the proposed lease and preaward approval from GSA's Office of Inspector General, Regional Counsel, and the Regional Acquisition Management Staff, as well as the Department of Labor's Office of Federal Contract Compliance Programs. To facilitate these preaward reviews, GSA realty staff is to copy a complete record of the proposed lease, which can involve boxes of material, and provide it to reviewing offices. Many of GSA's lease clauses, provisions, and specifications are strictly controlled as a result of law, executive order, or external regulation or are standards that have been requested by the customer agency. As a consequence, GSA realty specialists have limited flexibility to take advantage of available real estate market opportunities. GSA specifies leased space requirements, as well as the criteria that will be used to award a lease, months before soliciting offers from landlords. Once the leasing process has begun, GSA realty specialists have limited flexibility to modify space requirements or award criteria to take advantage of available market opportunities, even those that they believe could be extremely good deals. GSA realty specialists are concerned that any change could be construed by some potential landlords as unfair or detrimental and could result in bid protests. For example, if GSA receives two comparable offers and one building has a fitness center but the other does not, GSA cannot consider this additional amenity in selecting the winning bid, regardless of its desirability or value, unless one of the award criteria was having a fitness center.
Similarly, GSA realty specialists have limited flexibility and have been reluctant to modify standard lease clauses and provisions, even those within GSA’s discretion. GSA’s detailed space and geographic location requirements as well as the criteria for awarding a prospective lease are specified in a document called the solicitation for offers. GSA formally advertises these office space leasing requirements and provides the solicitation to commercial landlords or their broker representatives who may be interested in competing for federal leases. Because the solicitation specifies all requirements so that potential landlords have full knowledge of what they are bidding on, it is complex and lengthy. GSA’s standard lease solicitation contains about 40 pages. It contains at least 12 pages of general information about GSA’s leasing process and space needs, such as a description of the amount, type, and location of needed space; the award factors; and how to prepare and submit offers. This is followed by 26 pages of technical specifications covering various matters, such as general architectural standards and interior finishes; mechanical, electrical, and plumbing systems; services, utilities, and maintenance; and safety and environmental requirements. Some of the specifications in the solicitation are basic and straightforward, such as requiring walls around elevator shafts and rest rooms or that work completed in connection with the lease be done by skilled workers and mechanics. Others are more technical and esoteric and highly prescriptive. For example, GSA’s standard lease solicitation specifies the carpet pile yarn content, carpet pile construction, pile weight, secondary back density, carpet construction, and static buildup for carpet tiles installed in the space; defines acceptable noise levels in terms of a minimum ceiling noise reduction coefficient and a minimum ceiling and partition sound transmission class and includes more prescriptive noise specifications when such requirements are of particular concern to the customer agency; contains seven pages of handicapped access standards; and establishes a janitorial service schedule that details what must be done daily, 3 times a week, weekly, every 2 weeks, monthly, every 2 months, 3 times a year, twice a year, annually, every 2 years, and every 5 years. This rigid, highly prescriptive approach carries over into GSA’s standard lease. GSA’s standard lease incorporates, in full, the 40 pages of general information and technical specifications that were included in the solicitation, plus varying numbers of additional pages of lease provisions and specific requirements—such as performance, ethics, and labor standards as well as any other requirements that include detailed specifications on how GSA wants the landlord to customize the space—that are unique to the subject lease. As with the solicitation, trying to cover all possible contingencies results in a complex and lengthy lease document. For example, the 34 leases we sampled averaged 90 pages. According to GSA’s Office of General Counsel, much of this length results from lease clauses GSA has adopted administratively that are not specifically required by law. The standardized lease that GSA used for many of the 34 leases we sampled contained well over 100 lease clauses that were designed to comprehensively protect the government’s interests. Some clauses—such as those giving the government authority, without penalty, to change tenants—transfer risk to the lessor. 
GSA revised its standardized lease in August 1992 and eliminated or shortened some lease clauses. However, it did not (1) determine whether all standard lease clauses are needed, (2) identify how often particular clauses are actually being used, or (3) target lease clauses that may cost more than they save or otherwise need to be reexamined or reconsidered. Besides a rigorous and rigid leasing process, GSA relies on its realty staff to identify available space for lease, solicit offers, and award leases. However, GSA may not have enough leasing activity in particular markets for its realty staff to remain sufficiently knowledgeable of current market conditions and trends, space availability, or good leasing values. In San Francisco, for example, GSA awarded only three leases in fiscal year 1992 and four in fiscal year 1993. Also, as discussed later in this chapter, GSA lacks a complete and useful automated database on current commercial realty activities and rates. Without such data, GSA cannot effectively monitor market trends or evaluate the offers it receives from prospective landlords. On 34 leases we sampled that were awarded in San Francisco, New York, and Dallas between 1988 and 1992, GSA took an average of about 20 months to deliver office space to the requesting federal agency. As table 2.1 shows, the amount of time GSA took to deliver space on the sampled leases, from the date of the agency’s request for space until the date the space was available for agency occupancy, ranged from a low of 4.8 months to a high of almost 66 months. We did not make or obtain our own independent market real estate appraisals for the 34 GSA leases we sampled. However, GSA had made or obtained a market real estate appraisal for 24 of these leases before lease award. GSA’s own price determinations acknowledged that the rates it paid for at least 10 of these 24 leases exceeded their fair market values as established by these appraisals—2 leases were between 10 and 16 percent higher, 3 of them were between 5 and 10 percent higher, and the 5 others were higher by 5 percent or less. GSA’s stated reasons for awarding these leases at a higher rate were that there were no alternative competing offers and an urgent need to get the federal tenant agency into new space. Using GSA’s appraisals for these 24 leases, we attempted to compare the lease rates GSA paid with those paid by the private sector. However, we were not able to make conclusive comparisons because the appraisals did not contain enough information for us to determine whether (1) the private sector leases that GSA’s appraisers used were valid comparables or (2) the adjustments that GSA’s appraisers made to account for differences in the terms and conditions of the leases and the quality, exact location, and amenities of the space involved were appropriate and reasonable. In its October 19, 1994, written comments on a draft of this report, GSA said it did not understand why we could not use its appraisals to make conclusive comparisons. We could not make conclusive comparisons because GSA’s appraisals for these leases did not contain enough data and supporting documentation to permit us to independently verify their accuracy and validity. The 82 commercial landlords and brokers we contacted that GSA had solicited for offers on the 34 sampled leases generally were highly critical of GSA’s leasing process, and many of them said that GSA pays too much for leased space. 
They characterized GSA's leasing process and leases as overly prescriptive and bureaucratic, confusing and time-consuming, contrary to commercial real estate practices, and transferring excessive risks to the lessor. Consequently, these landlords and brokers said that they are reluctant to compete for GSA's leases. Many of those who do compete said that they increase their rental rates in order to compensate for the uncertainties, added risks, and administrative red tape they perceive are implicit in doing business with GSA. Over one-half (45) of the 82 commercial landlords and brokers we contacted specifically said that GSA pays inflated rental rates on its leases. Basically, they said that GSA's leasing approach and process cause it to pay more than necessary for leased space. They generally attributed this to GSA's rigid, bureaucratic, and time-consuming leasing process and resulting standardized leases, which they view as cumbersome, confusing, and lengthy. Table 2.2 shows the specific reasons these 45 landlords and brokers cited for this belief and the number of landlords or brokers that cited each reason. Many of them cited more than one reason. The landlords and brokers we contacted said that they typically increase their proposed rental rates to GSA to compensate for perceived added risks because they do not understand the cost implications of many of GSA's standard lease clauses, technical specifications, or space build-out requirements. Specifically, 27 brokers and landlords said that GSA pays too much for leased space because of the way the agency approaches space customizing (buildout) to meet the federal tenant agency's specific needs. Similarly, 24 landlords and brokers said that GSA pays more because it insists on using a standard lease, and 16 said that GSA pays too much because its solicitations are confusing. Finally, 24 of them specifically said that GSA pays more because many landlords and brokers increase their rates to compensate for the time and effort involved in working their way through federal paperwork requirements and bureaucracy. Those landlords and brokers who said that GSA was getting reasonable rates attributed this to a soft commercial real estate market. However, they too cited several characteristics of GSA's leasing process and leases that they said tend to increase federal lease rates. The commercial landlords and brokers we contacted also said that GSA's procedures for customizing or building out leased space to meet federal tenant agency requirements—referred to as the leasehold improvement process—add to the length and complexity of GSA's standard lease and also transfer risk to the owner. GSA expects prospective landlords to estimate these costs and include them in their bids but does not provide architectural plans. Since customizing the space to meet the federal agency's specific requirements can be expensive, landlords are faced with considerable financial uncertainty. This uncertainty is heightened by the fact that some special federal space requirements, such as bulletproof glass for the Secret Service's offices or secure weapons storage facilities for law enforcement agencies, are uncommon in the private sector. Thus, landlords generally are unfamiliar with the costs of such special federal requirements. To compensate for the uncertainties and risks that are inherent in building out the space to GSA's specifications, many of the commercial landlords and brokers we contacted said that they increase their proposed rental rates to GSA.
Full and open competition for GSA's leases is designed to ensure that all responsible sources are allowed to compete. All competitors must be provided the same information and judged on the same criteria. Competition also serves as the government's primary price control mechanism. However, GSA had relatively little competition for the 34 leases we sampled, and over 90 percent of the 82 commercial landlords and brokers we contacted said that GSA's highly prescriptive and process-oriented approach discourages competition for government leases. Our review of 34 GSA leases in San Francisco, New York, and Dallas showed that many brokers and landlords who were invited to compete for them did not respond. On these 34 leases, GSA issued a total of 261 solicitations to 167 brokers or owners but received only 67 responsive offers. As table 2.3 shows, GSA had only one or two responsive offers to consider for 71 percent of these leases. One lease illustrates the limited private sector response to GSA's solicitations. In May 1988, GSA issued a lease solicitation for about 40,000 square feet of space in New York to 11 prospective brokers or building representatives, some of whom represented more than 1 building, but received only 2 offers. GSA said this was because the real estate market was strong at that time, and many landlords simply were not interested in doing business with the government. Of the two landlords who submitted an initial offer, one later withdrew from the competition because he objected to some of the requirements in GSA's lease clauses. GSA eventually awarded this lease to the sole remaining landlord at a rate that was 16 percent above its own real estate market appraisal. As mentioned earlier, we contacted 82 of the 167 landlords or brokers that GSA solicited for offers on the 34 leases we sampled. Almost all of them (94 percent) said that GSA's process discourages competition. While each landlord or broker we interviewed had a slightly different story to tell about why GSA's leases attract limited competition, many of them cited dissatisfaction with some aspects of GSA's leasing process and acknowledged that this affects the nature and degree of competition as well as the lease rates that GSA pays. Table 2.4 shows the reasons landlords and brokers cited for limited competition for GSA's leases. Of the landlords and brokers we contacted, 68 percent said they are reluctant to compete for federal leases because of the bureaucratic and time-consuming nature of GSA's process. They said that GSA's numerous internal reviews, coupled with the unfamiliar and time-consuming paperwork requirements, make leasing to the federal government a very frustrating experience. Also, they said that GSA is reluctant to modify or eliminate lease clauses to address their concerns. One broker commented, "Everyone's involved in the process, but no one can make a decision." Forty-three percent of the brokers and landlords we contacted said that GSA's lease solicitation discourages competition. They said that GSA's lease solicitation can be very daunting because of its size and complexity. For example, one landlord commented that as soon as he signed the lease he was probably out of compliance because of the large number of GSA requirements involved. Also, these landlords and brokers said they find GSA's lease solicitation to be confusing because many of its requirements and specifications are highly technical and esoteric and differ from commercial market norms.
For example, one broker said that parts of GSA’s solicitation are so technical they are beyond the average broker’s comprehension. Similarly, 43 percent of the landlords and brokers we contacted objected to GSA’s standard lease. They find it unresponsive to their concerns and difficult to understand and comply with. Many of them said that GSA’s standard lease clauses are cumbersome and confusing because such clauses generally do not exist in private sector lease contracts. Also, they pointed out that the effect of some GSA clauses, such as those giving the government the option (without penalty) to terminate the lease unilaterally after giving the owner notice or to change tenants, is to transfer risk to the lessor. Another clause frequently mentioned as problematic was the right to substitute other federal tenants. Landlords are concerned that GSA might transfer an agency into their building that would not be consistent with the building’s character, such as a law enforcement agency in a downtown office building, and that they would have little, if any, say in this. Since landlords can maximize their profits by securing tenants quickly, they said they prefer to rent to businesses that typically move into space and begin paying rent much quicker than GSA. Some of them said they simply refuse to do business with GSA. A few brokers said that they would compete for a GSA lease only if they had no other prospective tenant. Other brokers said that landlords may rent to commercial tenants during the lengthy period GSA’s leasing decision is pending, and this also can reduce GSA’s options in choosing prospective buildings. Landlords’ and brokers’ general reluctance to do business with GSA could worsen. Brokers we contacted noted that, during this soft real estate market when buildings are partially vacant and there are few other potential tenants, landlords generally are willing to rent to GSA because they are desperate for tenants. However, they said that GSA will be at a greater disadvantage when the real estate market improves. Although there is no standard private industry leasing model, and practices differ from firm to firm, the practices of the 12 major private sector firms with large portfolios of leased office space that we contacted share several common characteristics that seem to help them take advantage of available market opportunities and lease space quickly. This chapter describes these firms’ leasing approach and practices on the basis of interviews with their realty managers, who willingly provided us with information about the firms’ leasing activities. Basically, these 12 private firms are results oriented, take a flexible and practical approach to leasing, and treat each lease as a unique case. Their leasing processes and practices generally are simpler, less time consuming, and more cost efficient than the General Services Administration’s (GSA). In contrast to GSA, for example, they do not (1) establish prescriptive, detailed technical specifications or (2) require extensive, multilevel reviews of proposed lease contracts. These firms rely on the expertise of their in-house realty staffs or commercial brokers to lease space and are willing to modify their requirements and negotiate trade-offs with landlords to quickly conclude a deal. For these and other related reasons, the realty managers we contacted said that they believe their firms and most other private sector firms generally get better overall leasing values than GSA. 
As discussed in chapter 2, this belief is shared by many of the landlords and brokers GSA solicited on 34 sampled leases. The 12 private firms we contacted focus on results rather than on the process when leasing space. Because leases have a direct impact on profitability and productivity, their major concern, according to the realty managers we contacted, is to quickly obtain space that meets their operational needs at a competitive rate. Most of these realty managers said that their firms’ total leasing process—from identifying the space need to occupying leased space—typically takes 6 months or less. Rather than establishing mandatory guidelines or prescribing step-by-step procedures, these private firms typically rely on the expertise of their leasing staffs or commercial brokers, and on flexibility, to lease needed space. They do not place requirements on their realty managers that may impinge on the firm’s ability to achieve results. For example, one realty manager explained his firm’s rationale for avoiding excessive controls over the leasing process. He said that rigid procedures only increase paperwork and discourage staff from taking initiative and responsibility in meeting space needs. In addition, he said that if leasing procedures are excessively prescriptive, his staff may become overly concerned with following procedures rather than pursuing the real objective of quickly obtaining space at a competitive rate. According to the private industry realty managers and commercial brokers we contacted, getting a good value in real estate often depends heavily on being postured to seize market opportunities as they appear because good lease opportunities can and do come on and go off the market quickly. Although the realty managers we contacted said that some private firms continue to rely on their in-house realty staffs to identify buildings and negotiate lease terms with prospective landlords, they said that using commercial brokers has two advantages: (1) gaining access to the brokers’ market knowledge and information networks and (2) reducing staffing costs. According to these realty managers, most brokers develop and maintain extensive databases on recent lease transactions. As a result, these managers said that most brokers have complete, reliable, and up-to-date information regarding recent actual lease rates, terms, and rental concessions in various geographic areas. Such databases give brokers easy access to the kind of information needed to assess overall market trends, plan negotiation strategies, and evaluate proposed lease terms. These realty managers said that such information allows brokers to negotiate more aggressively for lower lease rates and rental concessions. Also, these realty managers said that private sector firms have found they can reduce their costs substantially by using commercial brokers to lease space for them in lieu of having their own large, full-time realty staffs. In the commercial real estate industry, brokers generally earn their commissions from landlords for locating tenants and negotiating leases. Although brokers’ fees are not paid directly by private sector firms, the realty managers we contacted said that brokers would still secure good deals for them to ensure their future business. Some firms we contacted have given exclusive leasing rights to specific brokers so that the brokers will become more familiar with the firms’ operational needs for space. 
The leasing practices of the 12 private firms we contacted are simpler, more straightforward, and less rigid than GSA’s because they generally do not establish mandatory guidelines or prescribe step-by-step procedures for leasing space. Similarly, these firms do not require their realty staffs to develop detailed technical specifications before soliciting offers from potential landlords. Instead, their realty staffs typically meet with commercial brokers to discuss, in general terms, the amount and type of space needed. Their strategy is to be flexible and practical, determine what space is available that could meet their needs, and adjust their space requirements, if necessary, to get the best available leasing value. This nonprescriptive approach is more sensitive to landlords’ concerns since it does not impose as many tenant requirements or risks on them. According to the realty managers we contacted, once commercial brokers understand the firm’s space needs and constraints, including location and budgetary considerations, they begin to identify potential buildings that could meet these needs. These managers said their brokers are very knowledgeable about the commercial real estate market and generally identify potential buildings quickly. Brokers provide firms with a technical review of each building that potentially could meet their needs, including studies of building infrastructure and building systems as well as space efficiencies and workflow. When initial offers are received from potential landlords, brokers prepare a financial evaluation of each of the offers, identifying the ones that offer the best overall value. After the firm selects the building it wants, brokers try to negotiate for a better lease rate or for additional concessions. If the broker cannot successfully conclude negotiations for this building, the broker negotiates for the next most favorable building. The brokers we contacted emphasized that landlords do not want to waste their time and resources pursuing a potential tenant unless they have a realistic chance of getting the lease. Thus, brokers generally zero in on a few buildings after their initial survey of the space available in the market. Typically, one building emerges as offering the best overall deal, and the firm lets the landlord or broker know that this building is under serious consideration. These private firms also let the landlord or broker know what is needed to make the deal acceptable and whether the landlord or broker has a good chance of getting the lease. Some of these firms sign a letter of intent that, although not legally binding, signifies a serious commitment to negotiate a deal. Thus, landlords or their brokers are encouraged to spend the time necessary to put together a deal because they realize they have a good chance of getting the lease. The realty managers of the 12 firms we contacted believe that, rather than risk losing an interested client, landlords tend to be more willing to improve the lease package by offering additional concessions. These 12 private firms also recognize and are sensitive to landlords’ legitimate concerns about financial and other risks they incur when leasing space to tenants. The realty managers we contacted said that landlords are as eager to find a responsible tenant as tenants are to find a cooperative landlord. These managers also believe that the pressures of a competitive marketplace keep landlords from making unreasonable demands of their tenants. 
These managers said that sensitivity to landlords’ needs is part of being a good tenant and that it helps set a cooperative tone for future dealings. Rather than being dogmatic about their needs and leasing procedures, these private firms generally are willing to modify their requirements if doing so will result in a lower lease rate, a speedier transaction, or a better business relationship with the landlord. For example, a firm that prefers indoor parking will accept a building that can provide only outdoor parking if indoor parking is scarce or commands a high price in the marketplace. Similarly, an unexpected amenity, such as free parking, may be the factor that causes a firm to make a deal for a particular building. The 12 private firms we contacted said they also minimize the number and nature of internal reviews of proposed leases and that their reviews of proposed leases normally take 2 weeks or less. When these firms and their landlords reach a tentative agreement, the lease is reviewed at higher levels in the firm to determine its acceptability from a legal and business perspective. The legal department reviews the lease to determine if the landlord is shifting an unreasonable amount of risk to the firm and may add or modify lease clauses to protect the firm’s interests. Similarly, a high-ranking firm official typically reviews the proposed lease from a business perspective to assess the impact it is likely to have on the firm’s ability to achieve its business objectives. In its business review, the firm looks at items such as the lease rate, the building’s location, the length of the lease, and whether the space meets the firm’s operational needs. The leasing requirements and lease agreements of the 12 private sector firms we contacted usually are stated in relatively general terms. These firms typically use the landlord’s lease, which generally conforms to customary and prevailing commercial real estate practices and terminology. Although these firms have large inventories of leased space, they seldom impose their lease contracts on the landlords. Their realty managers acknowledged that private firms would be better protected if they were to use their own lease. However, they said that imposing their lease on landlords would increase the time needed to close the deal, increase the lease rate, and discourage some landlords from leasing to them. In effect, these private firms are willing to trade an increase in risk to them for timeliness, lower rates, and increased competition. These realty managers said that using the landlord’s lease is a common and accepted practice in the private sector. They said that landlords prefer to use their own lease because it generally is standardized for all tenants in the building, and this gives landlords a greater sense of control. These realty managers said that private firms generally accept the landlord’s basic lease provided they can modify certain clauses as necessary to protect their interests. Although the wording of leases differs from landlord to landlord, these managers said that commercial leases generally cover standard items and requirements. Many of these realty managers said it is impossible to write a lease that can anticipate and prevent all problems. They said that serious disagreements with landlords seldom occur and that most minor problems can be resolved informally through discussions. If a serious disagreement does occur, the wording of the lease will not prevent the tenant or landlord from seeking legal remedies. 
As a result, many of the realty managers we contacted believe a lease that is too prescriptive and overly protective of the tenant will only raise landlords’ concerns and slow down lease negotiation. Thus, these managers said they resolve problems during lease administration rather than trying to anticipate and preclude them in the lease contract. Typically, the lease contracts used by the 12 private firms we contacted are relatively short—less than 40 pages. Many of their terms and requirements are straightforward and not specified in great detail. For example, the lease may state in general terms that the landlord is responsible for providing repair and maintenance in a timely manner unless the tenant damages the property intentionally or through negligence, that the tenant may not sublet the space without the landlord’s consent, and that the landlord is responsible for meeting the requirements of the Americans with Disabilities Act. To help simplify the lease negotiation process, save time, and hold down leasing costs, the 12 private firms we contacted said they typically follow the common commercial real estate market practice of negotiating a tenant improvement allowance with the landlord. According to these firms, the use of such allowances typically enables them to customize or build out the space and have it ready for occupancy in 16 weeks or less. Unlike GSA, these firms do not ask landlords to assume the risks of customizing the space according to their operational needs. They said that without investing considerable time and effort in having their own engineers or independent contractors review the tenant’s proposed detailed floor plan and specifications, landlords do not know how much it will cost them to customize the space for the tenant. Space build-out allowances limit landlords’ total cost exposure. For example, an allowance of $25 per square foot for a 10,000-square-foot lease means that as part of the rental rate the tenant is entitled to space customizing improvements costing up to $250,000. Regardless of the ultimate costs of customizing the space, landlords are committed to pay only the first $250,000. The realty managers of these 12 private sector firms said they try to stay within the allowance limit. If these firms use less than the amount of the build-out allowance, however, landlords typically credit the balance toward their rent. Thus, many of them said they will accept existing build-out or modify their requirements to lower their leasing costs. The General Services Administration (GSA) has acknowledged that its leasing process is too time consuming, costly, and cumbersome to permit it to compete efficiently and effectively in today’s dynamic commercial real estate market. In the 1990s, GSA has initiated several actions aimed at streamlining its leasing process and improving its leasing performance. In response to the National Performance Review (NPR) and a recent initiative by the President, GSA is exploring other changes in existing leasing policies, procedures, and practices to improve its overall leasing efficiency and effectiveness. Also, recent legislation may encourage and facilitate improvements in GSA’s leasing process and performance. GSA’s inability to lease and deliver space to federal agencies in a timely manner is not new. It is well documented in a series of GSA studies and reviews dating back to the late 1970s, when criticisms of its performance first began to build. 
As early as 1978, agencies were complaining about GSA’s leasing monopoly, the lack of timeliness in its leasing process, and inconsistencies among GSA regions in responding to their requests for space. Over the years, agencies’ complaints about GSA’s time-consuming process and preoccupation with competition and procedural requirements at the expense of service delivery have become routine. One internal GSA review put it this way: “The space delivery process is unfocused, inefficient and getting worse. The process is too slow, too confusing, and a source of frustration to both our customers and the realty specialists who are primarily responsible for the product . . . Perhaps the most disturbing implication of all is the pervasive tone of defeat among so many of the participants in the space delivery process. There is a widespread sense that no one can change the process or make it work faster. We seem to have lost our will to succeed. The truth is that we can and unless we make substantial improvement in performance soon our customers are going to mount a successful drive to obtain leasing authority.” In 1988, GSA noted that it took an average of 307 days to deliver requested space to federal agencies, compared with an average of 239 days in 1977. Although GSA’s study emphasized that the exact causes for this increase could not be empirically determined, it noted that space requirements had become more sophisticated, the process had become more complicated and technical, and regulation had increased. This study made several recommendations for change that were aimed at improving GSA’s overall leasing performance. Most of the recommended changes were within GSA’s direct control. In response to the 1988 internal study, GSA undertook three initiatives aimed at improving the efficiency and effectiveness of its leasing activities—streamlining procedures for certain small leases, automating its database on commercial real estate market activities, and experimenting with the use of commercial brokers. Also, as indicated earlier, GSA eliminated or shortened some standard lease clauses in August 1992 but did not systematically reassess the basis for and continuing need for each standard lease clause. More recently, GSA has reduced some procedural controls over leasing that are within its administrative authority and changed its method of measuring space to conform with typical private sector practice. First, in August 1991, GSA established streamlined procedures for leases (1) under 10,000 square feet or (2) with a total cost of less than $25,000 or a term of less than 6 months. Since leases under 10,000 square feet comprise over 70 percent of GSA’s inventory, this expedited process was designed to be a faster, less complicated way for GSA to handle the majority of its leasing transactions. GSA intended to make this expedited process less formal, relying more on the expertise of realty specialists than on detailed specifications and contract requirements. However, GSA found that this expedited process is suitable only for existing space that can meet agency needs with minimal alterations. Therefore, it may be applicable only to a few small leases. Social Security offices tend to be under 10,000 square feet, for example, but they typically require a number of alterations to accommodate a high level of public contact and the needs of an elderly clientele. GSA monitored the use of this expedited leasing process for the 13-month period ended September 30, 1992, and found that it was used to award 280 leases in an average of about 57 days. 
GSA believes that the expedited process has been successful in reducing the amount of time it takes to award a lease. However, GSA has not determined the overall effectiveness of this new expedited process in reducing the total amount of time it takes to deliver occupiable space to the requesting federal agency. Also, GSA did not determine how often it could have used this expedited leasing process during that 13-month period. GSA no longer tracks or measures its overall leasing performance in terms of the elapsed times between agencies’ requests for space and agencies’ receipt of GSA-delivered space. Thus, GSA lacks complete data on its actual overall leasing performance, and it is not possible to compare its leasing performance today with the past. As part of its agencywide efforts under Total Quality Management to improve the responsiveness and quality of its mission-support services to federal agency customers, GSA began emphasizing a goal of delivering space when agencies actually need it and tracking its performance in meeting this goal. Second, GSA has begun developing a realty database using market information gathered by GSA’s appraisers. Although GSA’s appraisers gather information similar to that gathered by commercial brokers, the data has been stored only in hard copy, making it difficult and awkward for GSA’s realty specialists to use effectively. Thus, realty specialists typically get space availability and lease cost information by doing a market survey, which involves physically visiting the area and buildings where space is sought and talking with building representatives. In 1992, GSA instructed its regional office staff to begin automating the data so that its realty specialists would have easy access to local market information, thereby improving their knowledge of market conditions. As of May 1994, GSA had automated some data in 4 of its 11 regions. However, GSA has put further automation efforts on hold until it finalizes its plans for incorporating NPR’s recommended principles into its leasing program. Third, GSA has experimented with contracting for commercial brokers’ services. GSA’s Philadelphia region has contracted with a nationwide brokerage company that specializes in representing tenants to provide space planning, appraisals, inspections, and real estate consulting services (such as market surveys, financial analysis, and technical support). Under this contract, the company was required to conform to all federal leasing rules and procedures. A key anticipated benefit was the ability to use the private company to supplement GSA’s in-house staff when heavy workloads caused backlogs and prevented GSA from responding to agency needs in a timely fashion. According to a responsible GSA official, the region has used the private company primarily for appraisals and does not feel that the contract significantly reduced its leasing workload. GSA is leery of using the company to negotiate rates because of concerns about the potential for conflicts of interest. For example, GSA said that it would be highly vulnerable to a bid protest if the company awarded a GSA lease to an acquaintance and could not fully demonstrate that the deal was fair to all competing landlords. Also, GSA officials said that they were not satisfied with the results of earlier contracts the agency awarded to commercial brokers in its Kansas City region in 1990 and its Fort Worth region in 1987. 
GSA felt that these brokers did not adequately understand federal procurement requirements. These three GSA initiatives are steps in the right direction. However, they have not yet resolved federal agencies’ or the commercial real estate community’s frustrations with GSA’s leasing process. In a July 1993 testimony before the Subcommittee on Oversight of Government Management, Senate Committee on Governmental Affairs, the International Building Owners and Managers Association noted that commercial landlords and brokers remain frustrated by GSA’s complicated and convoluted real estate process, which discourages competition for federal leases. “Current statutory focus on process as a means to ensure ethical standards, fairness, economy, and efficiency has, in part, resulted in a system which is highly regulated, customer insensitive, slow to innovate, and slower to deliver.” “ . . . concepts of full and open competition, level playing field, maximizing sources, and removing barriers to competition reflect the system’s dominant concern with fairness to potential offerors. Although fairness is certainly an important value in public management, it may become an impediment to effective management when it places contractor’s interests ahead of the purchaser and the taxpayer.” In its October 19, 1994, letter providing written comments on a draft of this report (See app. I.), GSA identified two additional actions it has taken to improve its leasing process. These actions are (1) changing its method of measuring space to conform with typical private sector practice and (2) reducing some procedural controls over leasing that are within its existing administrative authority. Prior to June 1, 1994, GSA acquired space using the “net usable” measurement system, as opposed to the “rentable” measurement system typically used by the private sector. Effective June 1, 1994, GSA began to acquire space using the local “rentable” measurement system. GSA pointed out that the rentable measurement system produces a lower square-foot rental rate than the net usable measurement system. According to GSA, the GSA vs. private sector leasing value comparisons made by commercial landlords and brokers and private industry realty managers, which were discussed in chapters 2 and 3 respectively, may not have taken into account the different methods of measurement previously used by GSA and the private sector. Also, GSA said it did not believe that the commercial landlords and brokers we contacted took into account that GSA for over 20 years has been a leader in the implementation of laws and regulations and agency initiatives that require accessibility to the handicapped and adherence to strict fire and life safety standards in leased space. GSA acknowledged that these requirements often increase its leasing costs but said that they provide a value and quality of space that is expected and appreciated by its customer agencies. A few of the commercial landlords and brokers and private industry realty managers we contacted specifically mentioned GSA’s method of measuring space and its strict building accessibility and fire and life safety standards as factors that contribute to GSA paying higher lease rates. Typically, however, the landlords, brokers, and realty managers we contacted included individual factors such as these as part of their overall criticism of GSA’s lengthy and confusing lease solicitations and standard lease clauses. 
Accordingly, we summarized their overall criticisms in the draft report that GSA commented on and did not specifically mention space measurement differences, fire and life safety standards, or other individual contributing factors. Additionally, GSA’s October 19, 1994, letter pointed out that, in the 1990s, it has reduced, not increased, procedural controls that are within its authority. For example, GSA said that it has (1) increased the dollar threshold for leases that require GSA Inspector General review from $200,000 annual rent to $1 million annual rent in its National Capital and San Francisco regions and to $400,000 annual rent in all other regions and (2) eliminated the requirement for Office of Acquisition Policy, Office of General Counsel, and regional acquisition management staff preaward reviews of proposed leases. However, GSA emphasized that there have been no reductions in procedural controls that are mandated outside of the agency and that it has seen no movement toward such reductions. In chapters 2 and 5 of this report, we acknowledge that such externally imposed procedural controls also exist and need to be reexamined. In addition, we recognize in chapter 2 that NPR was especially critical of excessive federal procurement system rules, regulations, and procedural controls. Later in this chapter, we discuss recently enacted legislation that may encourage and facilitate reductions in procedural controls and improvements in GSA’s leasing process. In response to NPR, GSA committed itself to and developed plans for ending its long-standing service monopolies, separating its policymaking and oversight responsibilities from service delivery, revising its organization to improve how it interfaces with customer agencies, and using private sector practices as benchmarks to reengineer the way it does business. Also in response to NPR, GSA proposed total cost savings of $693 million in the leasing area in its March 1994 report on the results of its “Time Out and Review” of major approved public building new construction, modernization, and leasing projects. Of 64 leasing projects that GSA reexamined under its Time Out and Review initiative, it proposed savings on 26 of them—2 with savings of $103 million from lease cancellations, 19 with savings of $590 million from reductions in leased square footage, and 5 with savings to be determined through renegotiation. Also, GSA identified 19 major space requirements now satisfied by leased space where it believes that conversion to government ownership should be considered because it potentially could save hundreds of millions of additional federal dollars. GSA has committed to “reinventing” itself so that it can provide better services to its client agencies and, ultimately, the taxpayer. Leasing is one area where GSA is exploring needed changes and alternative ways of doing business to more fully satisfy federal agencies’ mission-support needs. Two of its regional offices—Denver, Colorado, and Auburn, Washington—are involved in this effort. Within the limitations of the Competition in Contracting Act (CICA) and other statutory provisions, GSA has empowered these two regional offices to identify and experiment with various leasing innovations. GSA’s objective is to collect enough data to document and validate the success or failure of each effort, identify specific factors that affected its success or failure, and determine which innovations worked. 
When these reinvention laboratory efforts began in September 1993, responsible GSA officials said that each new and existing lease would be considered as a potential candidate for testing specific changes in its traditional leasing process and standardized lease solicitations and agreements. Aspects of GSA’s leasing process that it has identified for testing include (1) waiving certain General Services Acquisition Regulation provisions, (2) reducing required documentation, (3) simplifying transaction forms, (4) working closer with federal agencies to develop their space requirements, and (5) delegating lease acquisition authority to selected federal agencies. Also, GSA plans to test private sector practices for leasing space. GSA regional participants said that it will take at least 2 years to accumulate enough evidence for GSA to draw any meaningful conclusions from these experiments. To date, the Denver region’s reinvention efforts have focused on analyzing GSA’s leasing process from three perspectives—federal customer agencies, the commercial real estate community, and GSA’s guiding policies and procedures. The region has used surveys, focus groups, and meetings to better identify and understand federal agencies’ space needs. Similarly, the region has interviewed several local real estate brokers and private developers to discuss GSA’s leasing process and how it can/should be improved. Finally, the region has examined GSA’s traditional leasing policies, procedures, and practices and researched the basis for each GSA step in the leasing process. According to an interim paper that the Denver region prepared in April 1994, its reinvention efforts to date have produced promising results. For example, the region reported reductions in the time required to complete major leasing steps, such as developing space requirements, surveying the marketplace, and preparing and negotiating the solicitation for offers. Also, the region reported benefits from the use of space layout drawings in lieu of the traditional quantified narrative requirements to communicate agencies’ space customizing requirements to lessors. In the policies and procedures area, the region found that most GSA leasing steps are not required by law but have evolved from past GSA practices. In this regard, the region concluded that GSA has continued to follow many of these institutional practices because they were convenient, and GSA employees—from regional realty specialists to central office legal counsel—had become comfortable with them. The region already has begun testing some process reengineering proposals and is considering the use of several others. As part of its reengineering efforts, GSA reorganized its Public Buildings Service (PBS) along business lines effective January 8, 1995, to separate its policymaking/oversight and service provider responsibilities and help facilitate the delivery of real estate services to federal agencies. PBS’s new organizational structure consists of (1) three policy and oversight components—Governmentwide Real Property Policy, Portfolio Management, and Business Development; (2) five service provider components—Property Management, Commercial Broker, Fee Developer, Federal Protective Service, and Property Disposal; and (3) three support components—Controller, Chief Information Officer, and Acquisition Executive. GSA’s leasing of federal office space is now handled by the Office of the Commercial Broker. 
Also in January 1995, in response to the President’s recent initiative to reduce the size of government and realize long-term cost savings, GSA announced plans to accelerate and broaden its ongoing reengineering efforts. GSA committed itself to identifying the most cost-effective method of carrying out each of its assigned mission-support responsibilities, including leasing, and seeking the authority to implement the most cost-effective solution. Also, GSA identified a number of potential internal and governmentwide long-term cost-savings opportunities in various support services areas and plans to establish—by October 1, 1995—a separate Office of Policy and Oversight to strengthen its capability to carry out governmentwide policy and oversight functions. The Federal Acquisition Streamlining Act of 1994 provides some of the tools needed to begin addressing the underlying problems with GSA’s leasing process that are discussed in this report. This act (P.L. 103-355, enacted on Oct. 13, 1994) authorizes simplified acquisition procedures for leases having an average annual rent of $100,000 or less and could result in performance improvements for GSA. Also, the act would entitle losing offerors to debriefings after an award, and these debriefings may reduce the number of bid protests. Furthermore, the act seeks to enhance the federal acquisition process through a wide-ranging set of performance-based management goals and incentives. However, GSA believes that two additional provisions that congressional conferees eliminated from the original legislative proposal—a succeeding lease provision and a two-step contracting provision that GSA intended to apply to its lease construction activities—could have facilitated further improvements in its performance. Finally, GSA could experiment with any needed related changes in federal procedural requirements and controls under the Government Performance and Results Act of 1993—P.L. 103-62. This act authorizes pilot projects for better performance goals and measurements and for increased managerial accountability and flexibility. GSA has been designated by OMB as one of the pilot agencies for performance plans and program performance reports and likely will also be a pilot agency for managerial accountability and flexibility. Under the act’s managerial accountability and flexibility provisions, established administrative procedural requirements and controls can be waived for up to 3 years, in return for specific accountability to achieve a designated performance goal. Participating federal agencies will have to demonstrate the expected effects on their performance resulting from greater flexibility, discretion, and authority and the improvements in performance resulting from the waiver. The expected improvements are to be compared to current and projected performance without the waiver. After 3 years, the agency can propose that the waiver be made permanent. In today’s commercial real estate market, good leasing opportunities come and go quickly, and getting a good value depends on being postured to seize market opportunities as they become available. However, the General Services Administration’s (GSA) highly prescriptive and process-oriented leasing approach—grounded in federal procurement law, uniformity, and numerous well-intended procedural controls added over the years—has become at odds with the dynamic commercial real estate market. 
It impedes GSA’s ability to get good, timely leasing values and may be causing the government to pay more than is necessary for leased space. Over the years, GSA’s leasing policies, procedures, and practices have become preoccupied with process at the expense of results, as numerous procedural controls were added to help (1) safeguard the government’s interests; (2) ensure compliance with federal procurement laws and regulations and other national policies; and (3) minimize fraud, abuse, and the number of bid protests. These goals are important, but the cumulative result of these well-intended procedural controls is a time-consuming and costly leasing process that does not work very well in today’s competitive commercial real estate market. GSA has begun reducing procedural controls that are within its authority but continues to focus primarily on process rather than results. In contrast, the more results-oriented approach that private sector firms typically use is much simpler and more flexible and takes less time. The private realty managers and commercial landlords and brokers we contacted generally believe that this approach results in better overall leasing values. Although there is no standard private industry leasing model, and practices differ from firm to firm, the practices of the 12 private firms we contacted share several common characteristics that help them take advantage of available market opportunities and lease space quickly. For example, these private firms are focused almost exclusively on results; take a flexible and pragmatic approach and rely on the market expertise of their in-house realty staffs or on commercial brokers to lease space; are willing to modify their requirements to conclude an advantageous deal; aggressively seek and negotiate bargains and concessions from landlords; and minimize the number and nature of internal reviews of proposed leases. GSA recognizes the need to improve the timeliness and cost effectiveness of its leasing process, has already adopted streamlined procedures for certain small leases, and is exploring other changes in response to the National Performance Review (NPR) and the President’s recent initiative to reduce the size of government and realize long-term cost savings. Administratively, GSA could change some aspects of its leasing process that seem to discourage competition for its leases, impede timely space delivery, and contribute to higher-than-necessary federal leasing costs. For example, GSA could (1) simplify and streamline its standard lease solicitation and lease agreement, (2) adopt the private sector practice of negotiating a tenant improvement or space build-out allowance, and (3) finish developing a complete and useful automated realty database on commercial real estate market activities and prices. Such changes would be steps in the right direction. Alone, however, such changes would not (1) fully and effectively resolve the long-standing, systemic leasing problems discussed in this report or (2) result in significant improvements in the overall timeliness, responsiveness, and cost effectiveness of GSA’s leasing activities. We believe that a more timely, responsive, and cost-effective GSA leasing process will require fundamental changes in the traditional federal leasing paradigm, GSA’s organizational culture, and its role in meeting agencies’ office space needs. 
GSA will need to reengineer its leasing process and implement policies and procedures to achieve those results and improve its overall leasing performance and responsiveness. In this regard, private industry leasing practices, such as those of the 12 private firms discussed in chapter 3, deserve consideration. These leasing practices may provide ideas for streamlining and simplifying GSA’s leasing process and making it more responsive to federal agencies’ mission-support needs and a better value for taxpayers. These practices could be tested to evaluate their benefits, risks, and potential federal application. GSA could seek legislative authority from Congress to test any alternative leasing practices for which it determines that such authority would be required. Any needed changes in federal procedural requirements and controls could be tested under the managerial accountability and flexibility provisions of the Government Performance and Results Act (GPRA). Federal agencies and the commercial real estate community have an important stake in GSA’s leasing policies, procedures, and practices. They can help GSA identify key problem areas and the most critical “pain points,” design needed improvements, and test and evaluate possible solutions. In the interim, while long-term improvements are being considered and tested, GSA could delegate more of its leasing authority to other federal agencies, as it has successfully done in the buildings management area. This could help mitigate the negative effects of GSA’s monopoly by providing the stimulus of competition and alternative experiences from which GSA and other federal agencies could learn. Finally, the federal laws, regulations, and other national policies that now influence GSA’s leasing process, especially the Competition in Contracting Act (CICA) and other statutory provisions, will need to be reexamined. The recently enacted Federal Acquisition Streamlining Act provides some of the tools needed to begin reengineering GSA’s leasing activities and making them more businesslike. However, this act was not designed to and did not address all the leasing problems identified in this report. We recommend that the Administrator of GSA fully explore opportunities to simplify and streamline GSA’s leasing process and make it less costly and time consuming, more responsive to federal agencies’ mission-support needs, and a better value for taxpayers. In this regard, GSA should work closely with federal customer agencies and the commercial real estate community to more fully explore their concerns about the existing leasing process, identify alternative ways of carrying out the leasing function, and test and evaluate their use and potential adoption; test the benefits, risks, and potential federal application of the private industry leasing practices discussed in chapter 3 of this report that are within its authority and seek the necessary authority from (1) Congress to test other practices and alternatives that GSA believes would require legislation and (2) the Office of Management and Budget to test any needed changes in federal procedural requirements and controls under the managerial accountability and flexibility provisions of GPRA; and adopt administratively or, if GSA determines that legislation is needed, propose to Congress the necessary legislation to enable it to adopt those private industry practices or other alternatives tested that result in documented improvements in GSA’s leasing performance, make sense, and are cost effective. 
In addition, GSA should reexamine its standard lease solicitation and lease agreement clauses and provisions and eliminate any of those within its administrative authority that are no longer needed, are of questionable utility, or are seldom used; within the limitations of CICA and other statutory provisions, empower and encourage its leasing officials to modify lease clauses and provisions as necessary and negotiate aggressively with prospective landlords for bargains and concessions to obtain good, timely leasing values; adopt the private sector practice of negotiating a specified dollar per square foot tenant improvement or space build-out allowance to eliminate the uncertainties and perceived added risks associated with GSA’s existing process and help hold down leasing costs; finish developing and implement an automated realty database on commercial real estate leasing activities and rates to help leasing officials evaluate the reasonableness of landlords’ proposed offers; and establish performance goals for its leasing activities and measurement systems to track progress in meeting those goals. While long-term improvements are being considered and tested, GSA should delegate more leasing authority to federal agencies that are ready, willing, and able to lease their own office space and monitor and oversee agencies’ use of that delegated authority. In written comments dated October 19, 1994, on a draft of this report, GSA agreed with its general thrust and said that it highlights the problems that hamper effective delivery of space. Except for the recommendation on space build-out, GSA also generally agreed with the thrust of the recommendations and said it will address them as part of ongoing efforts to reengineer its overall real estate program. However, GSA said that our recommendations cannot be fully implemented unless Congress grants it an exemption from CICA and other existing statutory constraints. Finally, GSA provided comments on several statements in the draft report and updated information on its leasing program and reengineering efforts, which we have included in this report where appropriate. GSA’s written comments are reproduced in appendix I. GSA stressed that, under present law, it cannot carry out leasing as would a private sector tenant. For example, GSA emphasized that it must comply with CICA and a host of other statutory constraints and that the costs of such compliance are time and money. GSA pointed out that these costs are not borne by the private sector against which it is being compared. We agree that the federal laws, procurement regulations, and other national policies that now guide and influence GSA’s leasing process, especially CICA, will need to be reexamined. In the draft report that GSA commented on, we recognized these statutory provisions and acknowledged that administrative changes, alone, will not fully and effectively resolve the identified leasing problems or result in significant improvements in the overall timeliness, responsiveness, and cost effectiveness of GSA’s leasing activities. As a consequence, we recommended that GSA seek the necessary authority from Congress to (1) test those private industry leasing practices and other leasing alternatives that it believes would require legislation and (2) adopt those practices or other alternatives tested that result in documented leasing improvements, make sense, and are cost effective. We have retained these same recommendations in this report. 
Within the limitations of CICA and other statutory constraints, GSA said that it has already addressed many of the leasing problems discussed in this report and is in the process of reengineering its overall real estate program to (1) improve the quality of service to customer agencies, (2) make it easier for building owners to do business with the government, and (3) improve cost effectiveness. According to GSA, the target date for initiating its new real estate program is January 1995. In the leasing area, GSA said that several reinvention labs have been organized to test alternative ways of acquiring leased space. Also, GSA said that its reengineering efforts are emphasizing the consideration and testing of private industry practices and that, within the limitations of CICA and the other statutory and regulatory constraints, it will (1) reexamine its standard lease solicitation and lease agreement with the goal of more streamlining, (2) extend the authority of its leasing officials to modify lease clauses and provisions to get better values, (3) research the availability of data on commercial real estate leasing activities and rates for use on a nationwide or regional level, and (4) address the development of performance goals and measurement systems for its leasing activities. Concerning the recommendation on space build-out, we believe that GSA may have misunderstood our intent. Our draft report recommended that GSA adopt the private sector practice of negotiating the costs of space build-out. In its written comments on this recommendation, GSA acknowledged that it expects landlords to estimate the costs of build-out without architectural plans but said that its existing space build-out methodology limits the landlord’s risks. GSA said that it negotiates with the lessor the estimated scope and unit costs of build-out and that both of these are included in the lease. GSA said that, upon completion of build-out, the lessor is paid a lump sum amount to cover any construction build-out above the negotiated scope or the government receives a credit if the scope of build-out is less than the level provided for in the lease. According to GSA, the alternative to this methodology would be to prepare architectural plans for each offeror, which would both further slow the leasing process and add costs that could not be expected to be recovered in the lease. In recommending that GSA adopt the private sector practice of negotiating the costs of space build-out, we did not intend that GSA prepare architectural plans for each offeror or even for each space build-out requirement. As discussed in chapters 2 and 3, the typical private sector practice is to negotiate a specified dollar per square foot tenant improvement or space build-out allowance that places a limit or cap on the landlord’s share of such costs. Under this approach, the landlord is not required to estimate the actual costs associated with any specified level of build-out. Several of the commercial landlords and brokers we contacted were specifically critical of GSA’s existing process that requires landlords to estimate and bid on the costs of space build-out and said that it is one of several factors that cause GSA to pay more than necessary for leased space. These landlords and brokers said that GSA’s space build-out procedures add costs, time, and uncertainty to the leasing process and transfer risk to the landlord. 
To compensate for these uncertainties and perceived added risks, many of the landlords and brokers we contacted said that they increase their proposed rental rates to GSA. Accordingly, we recommended in the draft report, and continue to believe, that GSA should adopt the private sector approach to space build-out. Most commercial landlords and brokers and private sector realty managers we contacted said that the private sector approach simplifies the lease negotiation process, saves time, and helps hold down leasing costs. In view of GSA’s written comments, we have reworded this recommendation to clarify our intent. Finally, GSA’s written comments did not address our last recommendation that, while long-term improvements are being considered and tested, it delegate more leasing authority to federal agencies that are capable of and willing to lease their own space. In subsequent discussions, responsible GSA officials said that GSA declines to take a position on this recommendation at this time. According to these officials, GSA will (1) take a position on this recommendation after it has implemented its new real estate program in January 1995 and (2) include that position in its formal response to the House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, and House and Senate Committees on Appropriations on this report. It should be noted that the NPR report on Reinventing Support Services recommended that GSA delegate to all federal agencies the authority to lease their own general-purpose space as part of giving agencies greater authority to choose their sources of real property services.
Pursuant to a congressional request, GAO reviewed the General Services Administration's (GSA) policies, procedures, and practices for leasing office space, focusing on how they compare with private industry leasing practices. GAO found that: (1) GSA has a highly prescriptive and process-oriented leasing approach that prevents it from securing good values on office space leases in a timely manner and impedes its ability to deliver office space to federal agencies; (2) federal procurement laws and procedural controls enacted to ensure leasing uniformity, compliance, and fairness unduly restrict GSA flexibility; (3) GSA may be paying too much for leased space because of its confusing and time-consuming solicitations and standard contracts; (4) the 12 private-sector firms reviewed use a simpler, more flexible, results-oriented leasing approach that reduces their leasing costs; (5) the private firms do not use detailed specifications and contracts or require multilevel reviews of proposed leases; (6) the firms rely on their staffs' expertise, conform to customary commercial practices, and assume more leasing risks; (7) GSA has streamlined its procedures for small leases and is exploring other leasing alternatives in response to initiatives to reduce the government's size and improve performance; (8) GSA could make other administrative changes to improve timeliness and reduce costs, but significant improvements will require fundamental changes in its leasing approach, organizational culture, and role in meeting federal office needs; and (9) federal procurement laws and regulations and other national policies need to be reexamined and possibly changed in order for GSA to adopt a more results-oriented leasing approach.
For over 2 decades, we have reported on problems with DOD’s personnel security clearance program as well as the financial costs and risks to national security resulting from these problems (see Related GAO Reports at the end of this statement). For example, at the turn of the century, we documented problems such as incomplete investigations, inconsistency in determining eligibility for clearances, and a backlog of overdue clearance reinvestigations that exceeded 500,000 cases. More recently, in 2004, we identified continuing and new impediments hampering DOD’s clearance program and made recommendations for increasing the effectiveness and efficiency of the program. Also in September 2004 and June and November 2005, we testified before this Subcommittee on clearance-related problems governmentwide, DOD-wide, and for industry personnel in particular. A critical step in the federal government’s efforts to protect national security is to determine whether an individual is eligible for a personnel security clearance. Specifically, an individual whose job requires access to classified information must undergo a background investigation and adjudication (determination of eligibility) in order to obtain a clearance. As with federal government workers, the demand for personnel security clearances for industry personnel has increased during recent years. Heightened awareness of threats to our national security since September 11, 2001, and efforts to privatize federal jobs during the last decade are but two of the reasons for the greater number of industry personnel needing clearances today. As of September 30, 2003, industry personnel held about one-third of the approximately 2 million DOD-issued clearances. DOD’s Office of the Under Secretary of Defense for Intelligence has overall responsibility for DOD clearances, and its responsibilities also extend beyond DOD. Specifically, that office’s responsibilities include obtaining background investigations and adjudicating clearance eligibility for industry personnel in more than 20 other federal agencies, as well as the clearances of staff in the federal government’s legislative branch. Problems in the clearance program can negatively affect national security. For example, delays in reviewing security clearances for personnel who are already doing classified work can lead to a heightened risk of disclosure of classified information. In contrast, delays in providing initial security clearances for personnel not previously cleared can result in other negative consequences, such as additional costs and delays in completing national security-related contracts, lost-opportunity costs, and problems retaining the best-qualified personnel. Long-standing delays in completing hundreds of thousands of clearance requests for servicemembers, federal employees, and industry personnel, as well as numerous impediments that hinder DOD’s ability to accurately estimate and eliminate its clearance backlog, led us to declare the program a high-risk area in January 2005. The 25 areas on our high-risk list at that time received their designation because they are major programs and operations that need urgent attention and transformation in order to ensure that our national government functions in the most economical, efficient, and effective manner possible. Shortly after we placed DOD’s clearance program on our high-risk list, a major change in DOD’s program occurred. 
In February 2005, DOD transferred its personnel security investigations functions and about 1,800 investigative positions to the Office of Personnel Management (OPM). Now, DOD obtains nearly all of its clearance investigations from OPM, which is currently responsible for 90 percent of the personnel security clearance investigations in the federal government. DOD retained responsibility for adjudicating the clearance eligibility of military personnel, DOD civilians, and industry personnel. Other recent significant events affecting DOD’s clearance program have been the passage of the Intelligence Reform and Terrorism Prevention Act of 2004 and the issuance of the June 2005 Executive Order 13381, “Strengthening Processes Relating to Determining Eligibility for Access to Classified National Security Information.” The act included milestones for reducing the time to complete clearances, general specifications for a database on security clearances, and requirements for greater reciprocity of clearances (the acceptance of a clearance and access granted by another department, agency, or military service). Among other things, the executive order resulted in the Office of Management and Budget (OMB) taking a lead role in preparing a strategic plan to improve personnel security clearance processes governmentwide. Using the context that I have laid out for understanding the interplay between DOD and OPM in DOD’s personnel security clearance processes, I will address three issues. First, I will provide a status update and preliminary observations from our ongoing audit on the timeliness and completeness of the processes used to determine whether industry personnel are eligible to hold a top secret clearance—an audit that this Subcommittee requested. Second, I will discuss potential adverse effects that might result from the July 1, 2006, expiration of Executive Order 13381. Finally, I will discuss DOD’s recent action to suspend the processing of clearance requests for industry personnel. With the exception of the update and preliminary observations on our current audit, my comments today are based primarily on our completed work and our institutional knowledge from our prior reviews of the clearance processes used by DOD and, to a lesser extent, other agencies. In addition, we used information from the Intelligence Reform and Terrorism Prevention Act of 2004, executive orders, and other documents, such as a memorandum of agreement between DOD and OPM. We conducted our work in May 2006 in accordance with generally accepted government auditing standards. Mr. Chairman, at your and other congressional members’ request, we continue to examine the timeliness and completeness of the processes used to determine whether industry personnel are eligible to hold a top secret clearance. Two key elements of the security clearance process are investigation and adjudication. In the investigation portion of the security clearance process, the investigator seeks to obtain information pertaining to the security clearance applicant’s loyalty, character, reliability, trustworthiness, honesty, and financial responsibility. 
For top secret security clearances, the types or sources of information include an interview with the subject of the investigation, national agency checks (e.g., Federal Bureau of Investigation and immigration records), local agency checks (e.g., municipal police and court records), financial checks, birth date and place, citizenship, education, employment, public records for information such as bankruptcy or divorce, and interviews with references. In the adjudication portion of the security clearance process, government employees in 10 DOD adjudication facilities—2 of which serve industry—use the information gathered at the investigation stage to approve, deny, or revoke eligibility for access to classified information. Once a case is adjudicated, a security clearance is issued up to the appropriate eligibility level, or alternative actions are taken if eligibility is denied or revoked. A major part of our audit is reviewing fully adjudicated industry cases to determine the completeness of both the investigations and the adjudications for top secret clearances. We will complete this audit and issue a report to your Subcommittee and other congressional requesters this fall. I will briefly mention three of the preliminary observations that we have been able to derive thus far from our audit. Communication problems may be limiting governmentwide efforts to improve the personnel security clearance process. The billing dispute that I discuss later in this testimony is one example of a communication breakdown. In addition, until recently, OPM had not officially shared its investigator’s handbook with DOD adjudicators. Adjudicators raised concerns that, without knowing what the handbook requires for an investigation, they could not fully understand how investigations were conducted or the investigative reports that form the basis for their adjudicative decisions. OPM indicates that it is revising the investigator’s handbook and is obtaining comments from DOD and other customers. OPM acknowledges that despite its significant effort to develop a domestic investigative workforce, performance problems remain because of the workforce’s inexperience. OPM reports that it is making progress in hiring and training new investigators; however, it has also noted that it will take a couple of years for the investigative workforce to reach desired performance levels. In addition, OPM is still in the process of developing a foreign presence to investigate leads overseas. OPM also reports that it is making progress in establishing an overseas presence, but that it will take time to fully meet the demand for overseas investigative coverage. Some DOD adjudication facilities have stopped accepting closed pending cases—investigations forwarded to adjudicators even though some required information is not included—from OPM. DOD adjudication officials need all of the required investigative information in order to determine clearance eligibility. Without complete investigative information, DOD adjudication facilities must store the hard-copy closed pending case files until the required additional information is provided by OPM. According to DOD officials, this has created a significant administrative burden. The July 1, 2006, expiration of Executive Order 13381 could slow improvements in personnel security clearance processes governmentwide as well as for DOD in particular.
Among other things, this executive order delegated responsibility for improving the clearance process to the OMB Director from June 30, 2005, to July 1, 2006. We have been encouraged by the high level of commitment that OMB demonstrated in the development of a plan to improve the personnel security clearance process governmentwide. Also, the OMB Deputy Director met with GAO officials to discuss OMB’s general strategy for addressing the problems that led to our high-risk designation for DOD’s clearance program. Demonstrating strong management commitment and top leadership support to address a known risk is one of the requirements for removing DOD’s clearance program from GAO’s high-risk list. Because there has been no indication that the executive order will be extended, we are concerned about whether such progress will continue without OMB’s high-level management involvement. While OPM has provided some leadership in assisting OMB with the development of the governmentwide plan, OPM may not be in a position to assume an additional high-level commitment for a variety of reasons if OMB does not continue in its current role. These reasons include the following: (1) the governmentwide plan lists many management challenges facing OPM and the Associate Director of its investigations unit, such as establishing a presence to conduct overseas investigations and adjusting its investigative workforce to the increasing demand for clearances; (2) adjudication of personnel security clearances and determination of which organizational positions require such clearances are not OPM responsibilities; and (3) agencies’ disputes with OPM—such as the current billing dispute with DOD—may need a high-level, impartial third party to mediate a resolution. DOD stopped processing applications for clearances for industry personnel on April 28, 2006. DOD attributed its action to an overwhelming and unexpected volume of requests for industry personnel security investigations, as well as funding constraints. We have testified repeatedly that a major impediment to providing timely clearances is DOD’s inaccurate projection of the number of requests for security clearances DOD-wide and for industry personnel specifically. DOD’s inability to accurately project clearance requirements makes it difficult to determine clearance-related budgets and staffing. In fiscal year 2001, DOD received 18 percent (about 150,000) fewer requests than it projected, and in fiscal years 2002 and 2003, it received 19 and 13 percent (about 135,000 and 90,000, respectively) more requests than projected. In 2005, DOD was again uncertain about the number and level of clearances that it required, but the department reported plans and efforts to identify clearance requirements for servicemembers, civilian employees, and contractors. For example, in response to our May 2004 recommendation to improve the projection of clearance requests for industry personnel, DOD indicated that it is developing a plan and computer software to have the government’s contracting officers (1) authorize the number of industry personnel clearance investigations required to perform the classified work on a given contract and (2) link the clearance investigations to the contract number.
An important consideration in understanding the funding constraints that contributed to the stoppage is a DOD-OPM billing dispute, which has resulted in the Under Secretary of Defense for Intelligence requesting OMB mediation. The dispute stems from the February 2005 transfer of DOD’s personnel security investigations function to OPM. The memorandum of agreement signed by the OPM Director and the DOD Deputy Secretary prior to the transfer lists many types of costs that DOD may incur for up to 3 years after the transfer of the investigations function to OPM. One such cost is an adjustment to the rates charged to agencies for clearance investigations; the agreement provides that “OPM may charge DOD for investigations at DOD’s current rates plus annual price adjustments plus a 25 percent premium to offset potential operating losses. OPM will be able to adjust, at any point of time during the first three year period after the start of transfer, the premium as necessary to cover estimated future costs or operating losses, if any, or offset gains, if any.” The Under Secretary’s memorandum says that OPM has collected approximately $50 million in premiums in addition to approximately $144 million for other costs associated with the transfer. The OPM Associate Director subsequently listed costs that OPM has incurred. To help resolve this billing matter, DOD requested mediation from OMB, in accordance with the memorandum of agreement between DOD and OPM. Information from DOD and OPM indicates that OMB subsequently directed the two agencies to continue to work together to resolve the matter on their own. According to representatives of the DOD and OPM inspector general offices, they are currently investigating all of the issues raised in the Under Secretary’s and the Associate Director’s correspondence and intend to issue reports on their reviews during the summer. Mr. Chairman, I want to assure you that we will continue taking multiple steps to assess and monitor DOD’s personnel security clearance program. As I have discussed, we are currently reviewing the timeliness and completeness of the processes used to determine whether industry personnel are eligible to hold a top secret clearance. We will report that information to your Subcommittee this fall. Also, our standard procedures for monitoring programs on our high-risk list require that we evaluate the progress that agencies make toward being removed from the list. Finally, we continuously monitor our recommendations to agencies to determine whether active steps are being taken to overcome program deficiencies. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact me at 202-512-5559 or [email protected]. Individuals making key contributions to this testimony include Jack E. Edwards, Assistant Director; Jerome Brown; Kurt A. Burgeson; Susan C. Ditto; David Epstein; Sara Hackley; James Klein; and Kenneth E. Patton. Managing Sensitive Information: Departments of Energy and Defense Policies and Oversight Could Be Improved. GAO-06-369. Washington, D.C.: March 7, 2006. Managing Sensitive Information: DOE and DOD Could Improve Their Policies and Oversight. GAO-06-531T. Washington, D.C.: March 14, 2006. GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006.
Questions for the Record Related to DOD’s Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. DOD’s High-Risk Areas: High-Level Commitment and Oversight Needed for DOD Supply Chain Plan to Succeed. GAO-06-113T. Washington, D.C.: October 6, 2005. Questions for the Record Related to DOD’s Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. GAO’s 2005 High-Risk Update. GAO-05-350T. Washington, D.C.: February 17, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. Security Clearances: FBI Has Enhanced Its Process for State and Local Law Enforcement Officials. GAO-04-596. Washington, D.C.: April 30, 2004. Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004. DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Defense Acquisitions: Steps Needed to Ensure Interoperability of Systems That Process Intelligence Data. GAO-03-329. Washington, D.C.: March 31, 2003. Managing for Results: Agency Progress in Linking Performance Plans With Budgets and Financial Statements. GAO-02-236. Washington, D.C.: January 4, 2002.
Central Intelligence Agency: Observations on GAO Access to Information on CIA Programs and Activities. GAO-01-975T. Washington, D.C.: July 18, 2001. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Clearances. GAO-01-465. Washington, D.C.: April 18, 2001. DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigations Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000. DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/T-GGD/OSI-00-177. Washington, D.C.: July 27, 2000. Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials. GAO/GGD/OSI-00-139. Washington, D.C.: July 11, 2000. Computer Security: FAA Is Addressing Personnel Weaknesses, But Further Action Is Required. GAO/AIMD-00-169. Washington, D.C.: May 31, 2000. DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999. Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999. Department of Energy: Key Factors Underlying Security Problems at DOE Facilities. GAO/T-RCED-99-159. Washington, D.C.: April 20, 1999. Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets. GAO/AIMD/GGD-99-67. Washington, D.C.: April 12, 1999. Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999. Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998. Inspectors General: Joint Investigation of Personnel Actions Regarding a Former Defense Employee. GAO/AIMD/OSI-97-81R. Washington, D.C.: July 10, 1997. Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996. Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996. Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995. Privatizing OPM Investigations: Implementation Issues. GAO/T-GGD-95-186. Washington, D.C.: June 15, 1995. Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995. Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995. Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995. Managing DOE: Further Review Needed of Suspensions of Security Clearances for Minority Employees. GAO/RCED-95-15. Washington, D.C.: December 8, 1994. Personnel Security Investigations. GAO/NSIAD-94-135R.
Washington, D.C.: March 4, 1994. Classified Information: Costs of Protection Are Integrated With Other Security Costs. GAO/NSIAD-94-55. Washington, D.C.: October 20, 1993. Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993. Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993. Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993. DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993. Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992. Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990. Weaknesses in NRC’s Security Clearance Program. GAO/T-RCED-89-14. Washington, D.C.: March 15, 1989. Nuclear Regulation: NRC’s Security Clearance Program Can Be Strengthened. GAO/RCED-89-41. Washington, D.C.: December 20, 1988. Nuclear Security: DOE Actions to Improve the Personnel Clearance Program. GAO/RCED-89-34. Washington, D.C.: November 9, 1988. Nuclear Security: DOE Needs a More Accurate and Efficient Security Clearance Program. GAO/RCED-88-28. Washington, D.C.: December 29, 1987. National Security: DOD Clearance Reduction and Related Issues. GAO/NSIAD-87-170BR. Washington, D.C.: September 18, 1987. Oil Reserves: Proposed DOE Legislation for Firearm and Arrest Authority Has Merit. GAO/RCED-87-178. Washington, D.C.: August 11, 1987. Embassy Blueprints: Controlling Blueprints and Selecting Contractors for Construction Abroad. GAO/NSIAD-87-83. Washington, D.C.: April 14, 1987. Security Clearance Reinvestigations of Employees Has Not Been Timely at the Department of Energy. GAO/T-RCED-87-14. Washington, D.C.: April 9, 1987. Improvements Needed in the Government’s Personnel Security Clearance Program. Washington, D.C.: April 16, 1985. Need for Central Adjudication Facility for Security Clearances for Navy Personnel. GAO/GGD-83-66. Washington, D.C.: May 18, 1983. Effect of National Security Decision Directive 84, Safeguarding National Security Information. GAO/NSIAD-84-26. Washington, D.C.: October 18, 1983. Faster Processing of DOD Personnel Security Clearances Could Avoid Millions in Losses. GAO/GGD-81-105. Washington, D.C.: September 15, 1981. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) is responsible for about 2 million active personnel security clearances. About one-third of the clearances are for industry personnel working on contracts for DOD and more than 20 other executive agencies. Delays in determining eligibility for a clearance can heighten the risk that classified information will be disclosed to unauthorized sources and can increase contract costs and cause problems in attracting and retaining qualified personnel. Long-standing delays in completing hundreds of thousands of clearance requests and numerous impediments that hinder DOD's ability to accurately estimate and eliminate its clearance backlog led GAO to declare DOD's personnel security clearance program a high-risk area in January 2005. This testimony presents GAO's (1) preliminary observations from its ongoing review of the timeliness and completeness of clearances, (2) concerns about the upcoming expiration of an executive order that has resulted in high-level commitment to improving the governmentwide clearance process, and (3) views on factors underlying DOD's decision to stop accepting clearance requests for industry personnel. GAO's ongoing review of the timeliness and completeness of security clearance processes for industry personnel has provided three preliminary observations. First, communication problems between DOD and the Office of Personnel Management (OPM) may be limiting governmentwide efforts to improve the personnel security clearance process. Second, OPM faces performance problems due to the inexperience of its domestic investigative workforce, and it is still in the process of developing a foreign presence to investigate leads overseas. Third, some DOD adjudication facilities have stopped accepting closed pending cases from OPM--that is, investigations forwarded to DOD adjudicators even though some required investigative information was not included. In addition, the expiration of Executive Order 13381 could slow improvements in the security clearance processes governmentwide, as well as for DOD in particular. The executive order, which among other things delegated responsibility for improving the clearance process to the Office of Management and Budget (OMB), is set to expire on July 1, 2006. GAO has been encouraged by the high level of commitment that OMB has demonstrated in the development of a plan to address clearance-related problems. Because there has been no indication that the executive order will be extended, GAO is concerned about whether the progress that has resulted from OMB's high-level management involvement will continue. Issues such as OPM's need to establish an overseas presence are discussed as potential reasons why OPM may not be in a position to assume an additional high-level commitment if OMB does not continue in its current role. Finally, inaccurate projections of clearance requests and funding constraints are delaying the processing of security clearance requests for industry personnel. DOD stopped processing new applications for clearance investigations for industry personnel on April 28, 2006. DOD attributed its actions, in part, to an overwhelming volume of requests for industry personnel security investigations. DOD's long-standing inability to accurately project its security clearance workload makes it difficult to determine clearance-related budgets and staffing requirements. The funding constraints that also underlie the stoppage are related to the transfer of DOD's personnel security investigations functions to OPM.
DOD has questioned some of the costs being charged by OPM and has asked OMB to mediate the DOD-OPM dispute. Information from the two agencies indicates that OMB has directed the agencies to continue to work together to resolve the matter. According to officials in the DOD and OPM inspector general offices, they are investigating the billing dispute and expect to report on the results of their investigations this summer.
Access to credit and investment capital is essential for creating and retaining jobs, developing affordable housing, revitalizing neighborhoods, and promoting the development and growth of small businesses. Over the past three decades, community-based financial institutions have demonstrated that strategic lending and investment activities tailored to the unique characteristics of underserved communities can be effective in improving the economic well-being of these communities and of the people who live in them. To help create new and expand existing financial institutions that specialize in serving distressed communities, the Congress, in the Community Development Banking and Financial Institutions Act of 1994, established the Community Development Financial Institutions (CDFI) Fund. In creating the Fund, the Congress recognized that the number of CDFIs was limited, most were small, and many were having difficulty raising capital needed to meet the demands for their products and services. The CDFI Fund currently provides needed capital primarily through two programs—the CDFI program, which makes awards directly to qualifying CDFIs, and the Bank Enterprise Award (BEA) program, which provides awards to insured depository institutions for investing in CDFIs and/or distressed communities. This report responds to a requirement in the 1994 legislation that we report on the Fund’s structure, governance, and performance 30 months after the appointment of an Administrator for the Fund. However, as agreed with your offices, the report does not discuss the structure and governance of the Fund because the Department of the Treasury’s Inspector General is conducting an audit addressing these issues. Our report evaluates how well the Fund is meeting its performance goals and objectives through the CDFI and BEA programs and considers opportunities for improving their implementation. In addition, because this report focuses on the Fund’s performance, it reviews the strategic plan developed under the Government Performance and Results Act of 1993 (Results Act) to guide the Fund’s other activities. The 1994 act established the CDFI Fund as a wholly owned government corporation. Subsequent legislation placed the Fund within the Department of the Treasury and gave the Secretary of the Treasury all of the powers and rights to manage the Fund, as set forth in the authorizing legislation. Figure 1.1 displays the Fund’s organization. The 1994 legislation also created an Advisory Board to advise the Fund on policy issues. The Fund’s Advisory Board consists of 15 members, including representatives from the departments of Agriculture, Commerce, Housing and Urban Development, the Interior, and the Treasury and from the Small Business Administration, as well as nine private citizens appointed by the President. From fiscal year 1995 through fiscal year 1998, the Fund received appropriations totaling $225 million. The Fund’s appropriations can be obligated over 2 years; therefore, the total budget authority available to the Fund in any given year is the current year’s appropriation plus any unspent funding from the previous year’s appropriation. Figure 1.2 shows the Fund’s appropriations by fiscal year. Of the amounts appropriated to the Fund, not more than $5.5 million may be used in a single fiscal year to pay the Fund’s administrative costs and expenses. The remaining appropriations have been committed primarily to the CDFI and BEA programs. 
Under the act, one-third of the amounts appropriated for programs in any fiscal year must be made available to the BEA program. The fiscal year 1995 budget proposed an estimated 30 full-time-equivalent staff positions for the Fund. However, the 1995 Rescissions Act limited the Fund to 10 full-time-equivalent positions in fiscal year 1995. During fiscal year 1997, the Fund’s staffing increased to 14 full-time-equivalent positions. For fiscal year 1998, the Fund is authorized 35 full-time-equivalent positions, and the Fund’s management intends to fill all of the vacant positions by the end of the fiscal year. The Fund’s administrative budget remains capped at $5.5 million. Under the CDFI program, the Fund invests in CDFIs to promote their long-term ability to serve economically distressed communities. Specifically, the Fund provides financial and technical assistance to CDFIs to enhance their ability to make loans and investments and to provide services for economically distressed communities, targeted populations, or both. CDFIs are private profit-making and nonprofit financial institutions that focus on providing financial services to distressed geographic areas and populations that are underserved by conventional lenders and investors. The following are among the types of organizations that qualify for funding under the program:

Community Development Bank: A community development organization centered around a bank or savings and loan that combines the structure and expertise of a profit-making financial institution with a commitment to a distressed place or population.

Community Development Credit Union: A financial cooperative owned and operated by lower-income persons. It provides financial services to its members, including savings and checking accounts and loans for homes, cars, or other personal needs.

Nonprofit Community Development Loan Fund: A financial intermediary that raises capital from individuals and institutional investors, churches, businesses, and foundations, at below-market rates, and relends these funds primarily to community-based organizations and businesses and nonprofit developers in low-income urban or rural communities.

Microenterprise Loan Fund: An entity that receives funding from a private or nonprofit foundation, government agency, or private bank and generally provides technical assistance and loans, ranging from as little as $500 to $10,000, to start up or expand self-help business opportunities for low-income individuals.

Community Development Venture Capital Fund: An entity that provides managerial support, along with equity and debt with equity features, to businesses (typically manufacturing-based) located in low-income communities.

Although two types of CDFIs—community development banks and credit unions—are regulated and insured by the Federal Deposit Insurance Corporation and the National Credit Union Share Insurance Fund, respectively, the remaining types of CDFIs are generally unregulated. Each year, the Fund solicits applications for awards from CDFIs seeking assistance. An organization’s application must include a comprehensive business plan covering 5 years or more and describing how the organization plans to meet the needs of its investment area, its targeted population, or both. While an applicant is given considerable flexibility in designating an investment area, the area must meet objective criteria for economic distress developed by the Fund.
An investment area may include a variety of geographic units reflecting the neighborhoods, areas, or markets that the applicant serves or proposes to serve. The Fund awards assistance competitively, using selection criteria that include the quality of an applicant’s business plan, the extent and nature of the plan’s impact on community development, and the experience and background of the applicant’s management team. This assistance may take the form of, for example, an equity investment, a grant, a loan, and/or technical assistance. In the CDFI program, there were 31 awardees in fiscal year 1996 and 48 awardees in fiscal year 1997. To receive any financial assistance through the CDFI program, a CDFI must be certified by the Fund, obtain matching funding from nonfederal sources, and enter into an assistance agreement with the Fund. The Fund certifies a CDFI after determining, among other things, that it has a primary mission of promoting community development, its predominant business activity is lending or investing in development, and it serves one or more economically distressed investment areas or targeted populations. The Fund’s certification signifies that a CDFI is eligible to participate in the CDFI program, but it does not constitute an opinion on the CDFI’s financial viability or indicate that the CDFI will receive an award. While the program’s regulations allow an uncertified CDFI to apply and be selected for an award, the CDFI will not receive financial assistance until it has been certified. The Fund requires CDFIs to be recertified every 2 years. As of March 1998, the Fund had certified 205 CDFIs. While the total number of CDFIs nationwide that could be certified by the Fund is unknown, we surveyed 925 community development organizations that described themselves as a type of CDFI (see app. I). To meet the requirement for matching funds, an awardee must obtain assistance from nonfederal sources that is at least equal in form and value to the Fund’s federal assistance. The awardee must provide the Fund with written documentation that it has received either a firm commitment from the provider of the matching funds or the matching funds themselves. This provision is intended to ensure that no federal funds are released until other resources have been leveraged. After selecting the awardees for a given funding round, the Fund begins negotiating an assistance agreement with each awardee. When both parties have agreed upon all of the elements, the Fund and the awardee enter into an assistance agreement by signing the negotiated document. Each agreement is tailored to the nature of the CDFI—regulated or unregulated—and the type of assistance given to the awardee. The Fund generally disburses funds after an agreement has been signed and the awardee has met the requirements for matching funds and certification. A key negotiated element of the assistance agreement is a performance schedule for the Fund to use in evaluating the awardee’s performance. This schedule includes performance goals and measures, benchmarks, and an expected evaluation date. Each agreement also requires the CDFI to provide quarterly and annual reports on its financial condition and progress toward meeting its performance goals. In addition, the agreement allows the Fund to apply sanctions, or remedies, that range from changing the goals to requiring the awardee to repay the award if it does not achieve at least a satisfactory level of performance by the evaluation date.
Each year, the Fund must evaluate each awardee’s performance using the performance goals, measures, and benchmarks in the awardee’s assistance agreement. The Fund describes performance goals as qualitative goals and measures as quantitative indicators of the extent to which the awardee has achieved the goals. Benchmarks define levels of performance, ranging from outstanding to unacceptable, which are assessed at specific dates. Awardees often attach “assumptions” to their goals and measures to identify factors beyond their control, such as the continuation of external funding, that may affect the achievement of their goals. The purpose of the BEA program is to encourage insured depository institutions to increase (1) their investments in certified CDFIs and (2) their lending and provision of other financial services in economically distressed communities. According to the Department of the Treasury, there are significant gaps in these communities’ access to capital and the market potential in them is one that banks often do not recognize or understand. By creating an incentive for banks to increase activities in these communities, the BEA program seeks to make it easier for mainstream sources of capital to serve low-income people. To encourage increased investment, the BEA program rewards banks on the basis of the amount by which they increase their investments and other financial services over a 6-month assessment period (compared with the preceding 6-month, or baseline, period). A bank applying for an award must demonstrate that it plans to invest in or otherwise support a CDFI that is certified by the Fund and/or that the lending and other financial services it plans to undertake are otherwise eligible under the BEA program’s guidelines and regulations. These guidelines and regulations spell out (1) the kinds of services for which the program will grant an award, such as business, consumer, agricultural, or single-family mortgage lending, and (2) the economic and other characteristics that define an area as a “distressed” community, which all of the banks’ rewarded increases in activity must serve. Investments in certified CDFIs are eligible for awards of up to 33 percent of the amount by which banks increase their investments, whereas lending and other services in distressed communities are eligible for awards of 5 percent of the increased investment. In its application for an award, a bank may include not only its plans for specific investments in CDFIs, loans, or services that it knows it will undertake but also good-faith estimates of eligible activities that it expects to undertake. As long as the Fund (1) has determined that banks have met all of its criteria for awards from the BEA program and (2) has sufficient BEA funds available, it announces that the banks that have applied successfully have won awards. However, the Fund does not disburse a bank’s award immediately. Instead, it disburses the award funds to the extent that the bank provides adequate documentation and other assurances that it has completed the increased investments and/or activities for which it received the award. The Fund disburses award funds in increments when a bank completes its increased activities in increments. For example, if a bank increased its investment in a certified CDFI in two equal installments, it could seek and the Fund would disburse half of its award after the first installment; the Fund would retain the second half until the bank had completed the second installment. 
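To make the award arithmetic concrete, the following worked example applies the two award rates described above. The dollar figures are hypothetical assumptions chosen only for illustration; they are not drawn from the Fund’s data. Suppose a bank’s investment in a certified CDFI is $600,000 higher in the 6-month assessment period than in the baseline period, and its eligible lending and other services in distressed communities are $1,000,000 higher:

\[
\text{Maximum award} = (0.33 \times \$600{,}000) + (0.05 \times \$1{,}000{,}000) = \$198{,}000 + \$50{,}000 = \$248{,}000
\]

If the bank then funded its increased CDFI investment in two equal installments, the Fund could disburse the corresponding portion of the award as the bank documented the completion of each installment, consistent with the incremental disbursement approach described above.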
For the 1997 awardees, the Fund gives an award on the basis of a bank’s commitment to increase an activity over a period of up to 3 years so long as the bank makes that commitment and begins to fund it during the 6-month assessment period. As a result, the Fund may take up to 3 years after approving an award to disburse all of the award. Once a bank has received its award funds, it may do anything it wants with them because the legislation creating the program places no restrictions on their use. The Fund has no authority, for example, to require banks to invest their award funds for specific purposes or to report on the uses to which they have put the funds. In the 1990s, the Congress established a statutory framework to address long-standing weaknesses in federal operations, improve federal management practices, and provide greater accountability for achieving results. This framework included as its essential elements the Results Act and key financial management and information technology reform legislation: the Chief Financial Officers Act of 1990—as expanded by the Government Management Reform Act of 1994—and the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996, respectively. Taken together, these legislative initiatives seek to respond to a need for accurate, reliable, and integrated budget, financial, and program information for congressional and executive branch decision-making. The goal-setting and performance measurement and improvement system envisioned by the Results Act is the centerpiece of this framework and starts with a requirement that each executive agency develop and periodically update a strategic plan covering a period of at least 5 years. This strategic plan is to include elements such as the agency’s mission statement, long-term goals and objectives, and strategies for achieving these goals and objectives. Under the Results Act, the first of these plans was due by September 30, 1997. With the submission of the fiscal year 1999 budget, agencies are also required to prepare annual performance plans that establish connections between the long-term strategic goals outlined in the strategic plans and the annual performance goals that guide agencies’ day-to-day activities. Finally, the act requires that each agency report annually on the extent to which it is meeting its annual performance goals and the actions needed to achieve or modify any goals that have not been met. The first of these reports, on programs’ performance for fiscal year 1999, is due by March 31, 2000. The Fund is required by the Department of the Treasury to provide the Secretary with a strategic plan that complies with the provisions of the Results Act. The Fund submitted its plan on September 23, 1997. Our objectives for this assignment were to evaluate (1) the progress of the CDFI Fund in developing performance measures for awardees in the CDFI program and systems to monitor and evaluate their progress in meeting their performance goals, as well as the accomplishments they have reported to date; (2) the performance of banks under the BEA program, the impact of the program on banks’ investment activities and on economically distressed communities, and the uses to which banks have put their award funds; and (3) the Fund’s progress in meeting the Results Act’s requirements for strategic planning and the steps the Fund could take to improve its management. Our report focuses on the first round of awards, which the Fund made in 1996.
To accomplish our first objective, we reviewed the Fund’s process for setting goals and developing performance measures with the 1996 awardees and discussed the Fund’s performance measurement system with responsible officials at the Fund’s headquarters in Washington, D.C. We also randomly selected six awardees as case studies to gain these awardees’ perspectives on the process of developing performance measures. In addition, we conducted a national survey of 925 CDFI organizations to obtain information on the performance measurement and monitoring systems used in the CDFI field. We reviewed the Fund’s statutory, regulatory, and other reporting and monitoring requirements and spoke with officials at the Fund and at our case studies to assess the Fund’s progress in developing systems for monitoring and evaluating awardees’ progress. Finally, we reviewed quarterly progress reports submitted by 19 of the 1996 awardees and held discussions with case study officials to assess the awardees’ progress. To accomplish our second objective, we reviewed the Fund’s guidance, policies, procedures, and other materials on the BEA awards process and discussed these and other issues related to the program with responsible Fund officials. We obtained data on the banks’ performance from the Fund’s status report on activities completed as of January 1998. In addition, we conducted case studies of five awardees that, collectively, provided the full range of activities for which banks can receive awards. To accomplish our third objective, we reviewed the Fund’s most recent strategic plan, developed in September 1997 in accordance with the Results Act’s requirements and OMB’s guidance. We judged the quality of the plan as a whole and of its primary components, using our knowledge of the program and using guidance developed by GAO for evaluating agencies’ strategic plans. We provided a draft of this report to the CDFI Fund for its review and comment. The Fund’s comments are addressed at the end of each applicable chapter. We performed our review from July 1997 through June 1998 in accordance with generally accepted government auditing standards. We relied on data provided to us by the Fund, by awardees in both the CDFI and the BEA programs, and by CDFIs responding to our survey. A more detailed discussion of our methodology appears in appendix I. In entering into assistance agreements with the 1996 awardees in the CDFI program, the Fund complied with the CDFI Act’s requirements for negotiating performance measures based on the awardees’ business plans. However, opportunities exist for the Fund to improve the nature, completeness, and specificity of the performance measures that it negotiates with future awardees. Our evaluation of the 1996 assistance agreements revealed (1) an emphasis on measures of activity, such as the number of loans made, rather than on measures of accomplishment, such as the net number of jobs created or retained; (2) the occasional omission of measures for key aspects of goals; and (3) the widespread omission of baseline data and information on target markets, needed to track progress over time. Including more requirements in the assistance agreements for reporting accomplishments could help to keep both the awardees and the Fund focused on the primary purposes of the CDFI program, and using more complete and specific goals and measures could facilitate the Fund’s monitoring and evaluation of awardees’ performance. 
The Fund is just beginning to develop mandated monitoring and evaluation systems. It has established reporting requirements for awardees to collect information for monitoring their performance, and it is developing postaward monitoring procedures for using this information to assess their compliance with their assistance agreements. Although the Fund has published some “success stories,” it has not established a system for evaluating the impact of awardees’ activities. Most CDFIs only recently signed their assistance agreements. Therefore, any reports of progress are limited and preliminary. Furthermore, the many different types of CDFIs support different types of activities and use different performance measures, making any general assessment of their progress difficult. The CDFI Fund’s progress in developing performance goals and measures for awardees in the CDFI program is mixed. On one hand, as of January 1998, the Fund had entered into assistance agreements with 26 of the 31 CDFIs that received awards in 1996 and had disbursed approximately $31 million of the $37 million set aside for this first round of awards under the program. As the CDFI Act requires, these agreements include performance measures that (1) were negotiated with the awardees and (2) are generally based on the awardees’ business plans. On the other hand, the performance goals and measures that the Fund negotiated with the 1996 awardees fall somewhat short of the standards for performance measures established in the Results Act. Because the CDFI Act provides no further guidance on developing performance measures, we drew on the Results Act’s standards for our evaluation, even though the performance measures in the assistance agreements are not subject to the Results Act. In part, the CDFIs’ widespread use of activity measures, rather than accomplishment measures, is attributable to concerns about isolating the results of community development initiatives from the influences of other factors, which may be beyond the awardees’ control. In addition, related concerns about the Fund’s possible imposition of sanctions appear to further deter the use of accomplishment measures. However, accomplishment measures are important to focus attention on the desired results of the Fund’s investments and can, in our view, be negotiated so as to address these concerns. Lack of written guidance on developing performance measures for the 1996 awardees may be partly responsible for the occasional omission of measures for key aspects of some goals. Finally, baseline data and data on target markets generally do not appear in the assistance agreements. Although we found that these data were generally available in other documents—and, according to Fund officials, were used in setting performance levels—their absence from the agreements could hinder the efficient evaluation of awardees’ performance. In total, the Fund negotiated 87 performance goals and 165 performance measures in the 26 assistance agreements that it signed with awardees through January 1998. These goals and measures were consistent with the CDFI program’s mission of promoting economic revitalization and community development and were generally based on the awardees’ business plans. To provide a framework for negotiation, the Fund developed a performance schedule that proved flexible enough for the many different types of CDFIs to tailor their performance measures to their particular activities. 
This schedule appears in each of the 1996 assistance agreements and is intended to be a complete description of each awardee’s planned performance for the period of the award (generally 5 years or more). Besides performance goals and measures, the schedule includes benchmarks (expressed in ranges) for evaluating the awardee’s level of performance as of the evaluation date and, optionally, at one or more interim dates. Finally, the schedule includes assumptions about external factors (i.e., factors outside the awardee’s control) that could affect the awardee’s performance. Table 2.1 illustrates the schedule, using one of the more comprehensive schedules prepared by a 1996 awardee. The performance schedule documents the results of the negotiations between the Fund and the awardee and sets forth the standards that the Fund will use to hold the awardee accountable for its use of the Fund’s money. If the awardee does not achieve at least a satisfactory level of performance for each measure, the Fund has the authority to impose sanctions (referred to as remedies), set forth elsewhere in the assistance agreement. Sanctions vary in severity. While some, such as requiring a change in the awardee’s performance goals, are relatively minor, others, such as requiring the repayment of any assistance that has been distributed to the awardee and/or barring the awardee from applying for any future assistance from the Fund, are much more severe. Our analysis of six 1996 awardees’ assistance agreements shows that their performance goals were generally based on the awardees’ business plans, as the CDFI Act requires. In some instances, the performance goals in the agreements also incorporated updates or adjustments reflecting changes that occurred after the business plans were submitted with the awardees’ application packages. For example, the Fund instructed the awardees to incorporate in their performance goals any changes that had taken place in their funding, operations, and/or target markets. In addition, Fund staff told us that they considered the performance projected in some of the business plans to be overly optimistic, and they said they used their expertise in CDFI subfields (e.g., loan funds, credit unions, venture capital funds) to help the awardees set more realistic, reachable benchmarks. Both the business plans and the performance goals in the assistance agreements supported the CDFI program’s mission of promoting economic revitalization and community development. The business plans, which formed the most significant component of each awardee’s application package, were required to include information about the anticipated impact of the awardee’s planned activities on the target community. These plans served, in large measure, as the basis for selecting awardees to participate in the program. The plans also provided the basis for negotiating the performance goals and measures in the assistance agreements. According to our analysis, 98 percent of the performance goals in the assistance agreements were consistent with the CDFI program’s mission of promoting economic revitalization and community development. However, as discussed below, the business plans we reviewed incorporated more accomplishment measures than the awardees’ assistance agreements. The Results Act, supplemented by Circular A-11, the Office of Management and Budget’s (OMB) implementing guidance, and a GAO document entitled The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20, Apr. 
1998), provides more explicit guidance for developing performance goals and measures than the CDFI legislation. Although the Results Act does not apply to awardees, it establishes the federal government’s standards for performance measurement, and its provisions apply to all federal agencies, including the Department of the Treasury, of which the CDFI Fund is a part. Thus, consistency between the awardees’ performance goals and measures and the provisions of the Results Act will facilitate the Fund’s compliance with the act. In broad terms, the Results Act and its guidance (1) consider measures of accomplishments (outcomes) preferable to measures of activities (outputs); (2) recommend the use of objective measures that adequately indicate progress toward the performance goals; and (3) state that the goals and measures should be measurable and quantifiable. According to the Results Act and its guidance, both activity and accomplishment measures can be useful. However, the act and guidance regard accomplishment measures as better indicators of a program’s results because they describe the effects of an organization’s activities on the populations that are expected to be served. The 26 assistance agreements that we reviewed contain relatively few accomplishment measures. Of the 165 measures, we identified 25, or about 15 percent, that were accomplishment measures. As a result, the assistance agreements focus primarily on what the awardees will do, rather than on how their activities will affect the distressed communities. This limited use of accomplishment measures contrasts with the widespread use of such measures that the CDFI field reported to us and that we observed in the business plans of our six case study awardees. To determine how the CDFI field assesses progress toward meeting its goals, we surveyed 925 CDFIs nationwide and received responses from 623 of them. Among the respondents were 24 of the 26 CDFIs that had signed (closed) assistance agreements as of January 1998. According to the responses we received, the CDFIs commonly use both accomplishment and activity measures to assess their progress. For example, 91 percent of the CDFIs providing lending services identified at least one quantifiable lending accomplishment measure, and nearly 60 percent identified three or more such measures. The responses of the 24 CDFIs with closed assistance agreements were consistent with those of the survey respondents overall and indicated a wider use of accomplishment measures than we observed in these 24 CDFIs’ assistance agreements. Table 2.2 compares the use of accomplishment measures as reported to us by the 24 awardees and as shown in their assistance agreements. When we compared the types of performance measures that our six case study awardees used in their business plans and in their assistance agreements, we found that the awardees generally included both activity and accomplishment measures in their business plans but were less likely to include accomplishment measures in their assistance agreements. For example, one awardee, a community development loan fund, included several activity measures (e.g., the number of loans made and the dollar volume of the loans) and accomplishment measures (e.g., the number of jobs created, stabilized, and upgraded) in its business plan. By contrast, the awardee included only activity measures (e.g., the number and dollar volume of loans made) in its assistance agreement.
Another awardee, a microenterprise loan fund, included both activity measures (e.g., the number and dollar volume of loans disbursed) and accomplishment measures reflecting the potential effect of these loans (e.g., increases in business assets and in the number of new jobs) in its business plan. In its assistance agreement, however, the awardee included only activity measures (e.g., the number and dollar volume of loans disbursed). Only one of the six case study awardees included an accomplishment measure in its assistance agreement that did not appear in its business plan. According to most of the case study awardees and Fund officials who negotiated with the awardees, the 1996 awardees were reluctant to incorporate accomplishment measures into their assistance agreements for two major reasons. First, they were concerned about being able to demonstrate that their activities had produced specific effects in distressed communities. Second, they were concerned that the Fund might impose sanctions if they could not achieve results that were, to some extent, beyond their control. Both the case study awardees and Fund officials recognized that it is difficult to isolate the effects of a CDFI’s community development activities from the effects of external factors, such as economic, social, and political conditions. Because they could not demonstrate a direct causal link between these activities and results in the community, they tended to negotiate measures of activities—what the awardees were doing—rather than measures of accomplishments—how their activities were improving economic conditions in distressed communities. While this approach is understandable, demonstrating such a causal link goes beyond what performance measurement requires. Specifically, performance measurement is intended to identify and track the changes associated with a program but does not require the establishment of an exclusive causal link between a program and an observed change. Establishing a direct causal link is a more complex task, often requiring controlled studies or statistical analyses, and is the purpose of impact analysis, discussed later in this chapter. Nevertheless, tracking changes associated with a program is important because it focuses attention on, and provides an indication of, the program’s results. The 1996 awardees were also reluctant to incorporate accomplishment measures into their assistance agreements because they were concerned about the Fund’s possible imposition of sanctions. As several awardees pointed out, accomplishments may be subject to outside factors and can, therefore, be harder to control than activities. Consequently, most of these awardees did not want to be held contractually accountable for meeting benchmarks for accomplishment measures over which they had limited control. As is clear from the array of assumptions set forth in the performance schedule illustrated in figure 2.1, an awardee’s performance may be subject to many factors beyond the awardee’s control—including the continuation of funding from other sources, interest rates, economic conditions, property taxes, the reputation of the local public schools, and the rate of inflation. The application of sanctions is at the Fund’s discretion, and the Fund has not yet demonstrated how it will apply sanctions for poor performance. The 1996 awardees were, therefore, uncertain about how the Fund would use sanctions.
Even though the Fund stipulated in the 1996 assistance agreements that it would generally not impose the more severe sanctions—that is, those requiring the repayment of assistance or reducing or terminating assistance—for failure to meet the benchmarks, the perceived threat of sanctions limited the types of measures considered during the negotiation of performance measures and benchmarks, according to three of the six case study awardees. For the 1997 awardees, the Fund has revised both the performance schedule and the assistance agreement. As table 2.3 illustrates, the revised schedule reduces the duration of the benchmarks from 5 years to 1 year. The revised agreement reintroduces the potential use of all the sanctions for unacceptable performance. Thus, the Fund could require the 1997 awardees to repay the assistance they have received if they do not meet their benchmarks. According to a senior Fund official, the Fund changed the duration of the benchmarks to facilitate compliance with the statutory requirement that it submit an annual report on awardees’ performance and to more effectively promote awardees’ accomplishment of their business plans. He also indicated that the Fund reintroduced the potential use of all sanctions so that it would have a full range of options available to address instances of noncompliance. However, with less time to reach the benchmarks and potentially stiffer sanctions for not reaching them, the awardees may be further deterred from using accomplishment measures. Alternatively, limiting the application of sanctions to measures over which awardees have control could allow the Fund to address instances of noncompliance without discouraging the use of accomplishment measures. According to the Results Act and its guidance, performance measures should adequately indicate progress toward the performance goal. To determine, for the 1996 agreements, whether the 165 performance measures indicated progress toward meeting the 87 goals, we systematically assessed whether the measures had a clear, apparent, or commonly accepted relationship to the intended goals and whether the measures appropriately represented the goals without showing any bias in measuring progress toward them. We found that about 96 percent of the measures were related to the goals and about 84 percent were unbiased. For example, one community development credit union’s goal was to significantly expand the availability of financial services to its members. The measures that accompanied this goal were the number of new checking accounts opened, the number of ATM cards issued, and the number of credit cards issued. These measures were related to and appropriately represented the goal because they were limited to financial services that the credit union currently offered to its members. The Results Act and its guidance also indicate that performance measures should address all key aspects of goals. Although we did not attempt to quantify the degree to which groups of measures addressed all key aspects of goals, we observed that there were differences in the extent to which goals were adequately addressed.
In some instances, a group of two or three measures did not address certain key aspects of a stated goal: One awardee’s goal was to “alleviate poverty by providing substantial loans and technical assistance to promote entrepreneurship and sustainable businesses.” The awardee’s assistance agreement included three measures for providing loans—the dollar amount of loans disbursed to nonprofit organizations, the number of borrowers receiving loans from the awardee, and the number of borrowers remaining in operation since the loans were made to them. However, the assistance agreement included no measures pertaining to entrepreneurship or technical assistance. In other instances, the measures as a group seemed to address all key aspects of the goal: One awardee’s goal was to “increase job opportunities by encouraging the start-up of new businesses and the expansion of existing businesses.” The associated measures were “the net increase in the number of full- and part-time jobs in businesses financed by the awardee as of April 1997,” and “the net increase in salaries and wage expenses of businesses financed by the awardee as of April 1997.” If we assume that the awardee will be funding only business start-ups or expansions, then the measures address the key aspects of increasing job opportunities—increasing the number of full- and part-time jobs and increasing wage expenses. Under the Results Act and its guidance, performance goals should be measurable and quantifiable. These characteristics, which are important to facilitate monitoring and evaluation, are best achieved through the use of specific units, well-defined terms, and baseline and target (end) values and dates. Identifying the geographic areas or populations to be served in the goal statements also makes monitoring and evaluation easier. Our analysis showed that the performance goals and measures in the 1996 assistance agreements had most of these features: All of the goals had at least one measure with a defined unit, such as a “loan,” and most (95 percent) of the units were at least minimally specific (e.g., “housing loans”). In nearly all (95 percent) of the goals and measures, the terms were generally known or, if not known, defined. Essentially all (99 percent) of the goals and measures had target values and dates that, together, comprise the benchmark (see fig. 2.1). While the performance measures in the 1996 performance schedules had several measurable or quantifiable features, they generally lacked baseline information. Specifically, 95 percent of the goals and measures lacked baseline values and 93 percent lacked baseline dates. Including such values and dates in the performance schedule is important because it establishes a context for understanding the significance of the benchmark ranges and ensures that the Fund and the awardee have an agreed-upon basis for measuring performance. Fund officials told us that they used specific baseline dates and values in negotiating the benchmarks for awardees’ performance measures. For some case study awardees, we were able to identify this information in other documents. However, having to refer to other documents to obtain this information undermines the assistance agreement as a ready reference for understanding and assessing progress toward an awardee’s stated goals. We also analyzed whether the goal statements identified the populations served by the awardees.
The majority (83 percent) of the goals and measures identified the geographic areas or populations to be served; however, the level of detail varied widely. For instance, some specified “the investment area” (e.g., a particular geographic area that can help define an awardee’s target market), while others simply identified a particular city or portion of a state (e.g., southwest Pennsylvania). Still others provided no information about the geographic areas or populations to be served. Moreover, it was often unclear how the area or group identified in a measure was related to the target market defined in the awardee’s certification documentation. If information about the target market does not appear in the performance schedule, the schedule creates no context for understanding why specific measures are targeted to particular groups or areas. Like the omission of baseline information, the omission of information about the target market is inconsistent with the usefulness of the performance schedule as a ready reference for understanding and assessing progress toward an awardee’s goal. Limitations in the 1996 awardees’ goals and measures can be attributed to several factors, including the recent implementation of the Results Act at all federal agencies, understaffing at the Fund and lack of training in the Results Act’s provisions for its staff, and lack of written guidance and training for the 1996 awardees on developing goals and measures. Over time, the Fund, like other federal agencies, has become more familiar with the Results Act, and it conducted training in April 1998 for its staff on the act’s provisions. The Fund is also hiring additional management and program staff to help implement the CDFI program. Finally, the Fund conducted a workshop for the 1997 awardees on the closing process for the assistance agreements. This workshop included more substantial guidance on drafting performance goals and measures. Although the Fund has developed reporting requirements for awardees to collect information for monitoring their performance, it lacks documented postaward monitoring procedures for assessing their compliance with their assistance agreements, determining the need for corrective actions, and verifying the accuracy of the information collected. In addition, the Fund has not yet established procedures for reporting and evaluating the impact of awardees’ activities. The effectiveness of the Fund’s monitoring and evaluation systems will depend, in large part, on the accuracy of the information being collected. Primarily because of statutorily imposed staffing restrictions in fiscal year 1995 and subsequent departmental hiring restrictions, the Fund has had a limited number of staff to develop and implement its monitoring and evaluation systems. In fiscal year 1998, it began to hire management and professional staff to develop monitoring and evaluation policies and procedures. The CDFI Act stipulates that the Fund shall require each CDFI receiving assistance to submit an annual report on its activities, financial condition, success in meeting its performance goals, and success in satisfying the terms and conditions of its assistance agreement. Before the Fund could review an awardee’s progress, it had to establish financial and performance reporting requirements based on the awardee’s assistance agreement. It then had to develop monitoring procedures to ensure that the required reports were received, reviewed, and acted upon as necessary. 
To help the Fund develop reporting requirements, the Treasury’s Office of Inspector General identified the types of information that the Fund should collect. The May 1996 Inspector General’s report identified the information that awardees needed to report quarterly and annually so that the Fund could assess their financial condition, performance, and compliance with the terms and conditions of their assistance agreements. The report also suggested that the information furnished by the awardees be supplemented with audited financial statements and reviews of the awardees’ organizations. Such reviews typically include desk reviews and site visits. The Fund’s reporting requirements meet the CDFI program’s statutory review requirements, and the 1996 assistance agreements incorporate the Inspector General’s suggested reporting requirements. Awardees are therefore required to provide both quarterly and annual reports on their progress in achieving their performance goals, submit financial statements certifying their financial viability, and report any changes in their operations that could materially affect their ability to meet the terms and conditions of their assistance agreements. Although the Fund has established reporting requirements for awardees, it lacks documented monitoring procedures for tracking and reviewing the data submitted in the quarterly and annual reports, according to an independent audit recently completed by KPMG Peat Marwick. The audit recommended, among other things, that the Fund develop written monitoring procedures, continue to design and develop a portfolio-monitoring database system, and track the receipt of all required reports for all awardees. Fund staff told us that they had not previously developed postaward monitoring procedures because of the CDFI Fund’s initial staffing restrictions and shortages. Now that the Fund has hired additional staff, it has turned its attention to correcting the weaknesses identified by the auditors. The Fund has established a system for tracking the receipt of quarterly and annual reports and the financial statements submitted by awardees. In addition, the Fund has hired two key staff—(1) a chief financial officer for management to oversee all aspects of the Fund’s administrative operations, accounting, reporting, management controls, budget, portfolio monitoring, and compliance with laws and regulations, and (2) an awards manager to conduct an initial assessment of the Fund’s monitoring requirements, recommend a monitoring and assessment process for the Fund’s awardees, and integrate these monitoring recommendations into the design of the Fund’s awards database. In addition to the weaknesses identified by KPMG Peat Marwick, we found that the Fund has not yet developed written guidance for verifying the information provided by awardees when they are reporting quarterly or annually on their performance. Furthermore, the Fund has not established plans or procedures for conducting periodic desk audits or site visits or for otherwise verifying the accuracy of the reported performance information. Staffing restrictions largely explain why the Fund has not yet developed such written guidance or procedures. The Fund may be able to take advantage of the expertise of the independent auditors that conduct annual audits of the awardees to verify some of the information on performance that the awardees report to the Fund.
Since the Fund is required in the assistance agreements to provide guidance to the independent auditors, this guidance could require that the audits, at a minimum, verify the results of activity-based performance measures and financial data. Providing this guidance would enable the Fund to leverage the expertise of the independent auditors in verifying both financial and performance information. To date, the Fund has provided the auditors only with verbal guidance, at their request. This guidance focused on the financial audit and did not address the verification of reported performance data. While providing verbal guidance may meet the provision of the assistance agreement, it does not ensure the same degree of consistency that would be derived from written guidelines. Fund staff have indicated that they intend to develop written guidance for the Fund’s own auditors and independent auditors during fiscal year 1998. Monitoring the compliance of awardees will require reviews by staff who understand CDFIs and can systematically assess their progress and evaluate their compliance with their assistance agreements. For example, each year, for the terms of their agreements, the 26 1996 awardees whose agreements we reviewed will be required to submit a total of 104 quarterly reports and 26 annual reports. For each report, the Fund will need to assess the awardee’s compliance with (1) agreed-upon performance benchmarks, (2) financial soundness covenants, and (3) requirements for serving target markets. The Fund will also need to decide how to respond when an awardee’s compliance is less than satisfactory. Such a decision will likely involve analyzing the awardee’s operations, the target market, and the influence of the external factors identified in the performance schedule. The Fund is developing procedures for reviewing awardees’ quarterly and annual reports and following up, as necessary, on any preliminary indications of noncompliance with provisions of the assistance agreements. When all 31 of the 1996 agreements and all 48 of the 1997 agreements are executed, the Fund will be receiving 316 quarterly reports and 79 annual reports each year. This volume will grow as the Fund makes future awards under the CDFI program. Currently, the program staff who will be reviewing these reports will also be responsible for certifying CDFIs and closing agreements with later awardees. As the volume and complexity of the Fund’s monitoring responsibilities grow, the Fund is likely to need additional monitoring capacity. The CDFI Act and the associated Conference Report establish requirements for evaluating and reporting on the CDFI program. The act specifies that the Fund is to annually evaluate and report on the activities carried out by the Fund and the awardees. The Conference Report elaborates, stating that the Fund’s annual “performance report” will (1) analyze and compare the overall leverage of federal assistance with private resources and (2) determine the impact of spending these resources on the program’s investment areas, targeted populations, and qualified distressed communities. 
To generate data for these analyses, the statute requires the awardees to report annually on their progress in carrying out their assistance agreements and to compile such data as the Fund considers pertinent on the gender, race, ethnicity, national origin, or other characteristics of individuals served by the awardees to ensure that the targeted populations and low-income residents of investment areas are adequately served. As directed by the Conference Report, the Fund has published an estimate of the private funding leveraged by the CDFI program. According to the Fund’s second annual report, the $37 million in assistance awarded to CDFIs in 1996 will leverage approximately three to four times that amount in new capital over the next several years. This estimate is based on discussions with CDFIs and CDFI trade association representatives, not on financial data collected from the awardees. Eventually, to better comply with the leveraging requirement, the Fund will need to develop and disseminate to awardees a method for calculating how much nonfederal assistance they are able to leverage with their awards. In addition, the Fund will need to develop a method for aggregating the leveraging ratios of the individual awardees into a leveraging ratio for the entire CDFI program. To determine the effect of awardees’ expenditures on the CDFI program’s investment areas, targeted populations, and qualified distressed communities, the Fund has thus far compiled and published, in its 1996 and 1997 annual reports, general descriptions of a few CDFIs’ activities and anecdotes about individuals served by selected CDFIs. These reports, however, do not include analyses of the gender, race, ethnicity, national origin, or other characteristics of individuals served by the awardees. This demographic information has just started to become available to the Fund through the 1996 awardees’ annual reports. It is still too early for the Fund to conduct a comprehensive evaluation of the CDFI program’s impact. The Fund made its first investment in a CDFI just 17 months ago, and data for a comprehensive evaluation are not yet available. Eventually, however, the Fund will need to conduct systematic research and analysis to determine the impact of awardees’ expenditures. Although the CDFI Act does not indicate how the Fund is to perform impact analyses, in prior work on economic development programs, we have reported that they must be based on research that associates economic improvements in investment areas with a program’s expenditures and accounts for the influence of other factors. As noted, isolating the impact of community development initiatives from other influences is a particularly challenging task. Satisfying the Conference Report’s requirement for impact analyses, in our view, will be a long-term, ongoing commitment for the Fund that will require expertise in evaluation and procedures for systematically gathering and analyzing information as it becomes available. Fund officials have acknowledged that their evaluation efforts must be enhanced, and they have planned or taken actions toward improvement. For instance, the Fund has begun hiring staff to conduct and/or supervise research and evaluations and has revised the assistance agreements for the 1997 awardees to require that they submit annual impact reports. 
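One possible approach to the leveraging calculations described earlier in this section is sketched below. This is a hypothetical illustration written in Python; the Fund has not prescribed a method, and the awardee names and dollar figures are invented. Each awardee’s leveraging ratio is computed as nonfederal dollars raised per award dollar, and the program-wide ratio is aggregated from dollar totals rather than from an average of the individual ratios.

    # Hypothetical sketch: the Fund has not prescribed a method, and these
    # awardee figures are invented for illustration.
    awardees = [
        {"name": "Awardee A", "award": 1_000_000, "nonfederal": 3_500_000},
        {"name": "Awardee B", "award": 2_000_000, "nonfederal": 5_000_000},
    ]

    def leverage_ratio(nonfederal, award):
        """Dollars of nonfederal assistance raised per dollar of CDFI Fund award."""
        return nonfederal / award

    for a in awardees:
        print(f'{a["name"]}: {leverage_ratio(a["nonfederal"], a["award"]):.1f} to 1')

    # Program-wide ratio aggregated from dollar totals (dollar-weighted), rather
    # than from an average of the individual awardees' ratios.
    total_award = sum(a["award"] for a in awardees)
    total_nonfederal = sum(a["nonfederal"] for a in awardees)
    print(f"Program-wide: {leverage_ratio(total_nonfederal, total_award):.1f} to 1")

Aggregating from totals weights each awardee by the size of its award, so the program-wide ratio reflects where the dollars actually are rather than treating every awardee equally.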
However, although the Fund has recently hired a program manager for policy and research, it has not yet reached a final decision on what information it will require from the awardees for the Fund to use in evaluating the program’s impact. Two key sources of information about the program’s impact are available for each awardee—the business plan, which includes a community impact section describing the awardee’s projected accomplishments over a 5-year period, and the performance schedule, to the extent that it includes performance information for accomplishment measures. The Fund also has to determine how it will integrate awardees’ reported performance and the lessons learned from related research in the CDFI field into its evaluation plans. Given that most CDFIs only recently signed their assistance agreements, reports of accomplishments in the CDFI program are limited and preliminary. Furthermore, given that the different types of CDFIs support different types of activities and use different performance measures, any general summary of their performance will be difficult. The vast majority of the 1996 awardees signed their assistance agreements between March 1997 and October 1997. Therefore, the Fund has only begun to receive the quarterly reports it requires on their performance. Through February 1998, the Fund had received 41 quarterly reports from 19 CDFIs. Sixteen of these CDFIs submitted either two or three quarterly reports while the other three CDFIs, which had closed their agreements with the Fund later in 1997, submitted one report. As of February 1998, the Fund had received three annual reports. Neither the Fund nor an independent auditor had verified the accuracy of any of the data submitted in either the quarterly or the annual reports received by the Fund. The 19 CDFIs that submitted quarterly reports through December 1997 include at least one of each of the five principal types of CDFIs introduced in chapter 1, as well as two other types of CDFIs—an intermediary for community development corporations that provides CDFIs with financial and technical assistance and a multifaceted community development corporation that manages more than one type of activity—a loan fund and a credit union. Figure 2.1 breaks down the 19 CDFIs by type. To analyze the performance reported by the 1996 awardees through February 1998, we categorized the 165 performance measures included in the 26 assistance agreements that we reviewed. These measures fell into over 40 different categories, from lending to investment to training. The different types of CDFIs both resemble and differ from one another in the types of activities they support and in the performance measures they use. For example, all 10 of the nonprofit loan funds make loans, and many use the same performance measures—increases in the total numbers and dollar amounts of loans made and increases in net assets—but the different funds make different kinds of loans, from loans to small businesses; to loans for major home improvements; to loans to borrowers who are elderly, disabled, or have special needs. One of the loan funds also uses a unique performance measure—increases in the number of new loan products—to track its progress in supporting innovative activities. Examples of new loan products include loans for child care facilities and loans for community water and wastewater systems. Given the variety of measures used, it is difficult to summarize all of the activity reported by the 19 awardees to date. 
To illustrate the cumulative reported activity, we totaled the data for the two most common measures—the total number of loans for both general and specific purposes and the total dollar value of these loans. According to our analysis, since the quarters in which the agreements were closed, the CDFIs cumulatively reported making over 1,300 loans totaling about $52 million. Of these, we identified 112 as business or commercial loans totaling about $7.4 million. Another 264 loans, totaling more than $15 million, were made either to individuals or to communities and included mortgage loans for purchasing or rehabilitating homes, personal loans, or loans for financing facilities. In addition, the CDFIs reported providing consumer counseling and technical training to 480 individuals or businesses. We derived another summary observation of the effect of the CDFI program’s funding to date from our survey of the 24 1996 awardees with signed assistance agreements. When we asked them to identify the extent to which the funds they had received from the program had enabled them to increase their services or their client/customer base, 12 indicated either a very great or a great increase, 7 indicated either a moderate or some increase, 4 reported that it was too early to tell, and 1 did not respond to the question. Revisions to the Fund’s assistance agreement should be easy to implement. For example, ensuring that measures address all key aspects of goals and adding baseline dates and values and information on target markets to facilitate program evaluation should not be difficult. However, encouraging the greater reporting of accomplishments will require overcoming awardees’ concerns about the possible imposition of sanctions for not meeting benchmarks for measures that are, to some extent, beyond the awardees’ control. These concerns may be alleviated by requiring the awardees to report accomplishments only in their annual impact reports. Awardees would be sanctioned only if they failed to submit the required reports. In large part because of staffing limitations, the Fund has not yet completed postaward monitoring and evaluation systems. It recognizes the importance of such systems and is in the process of hiring staff and determining the systems’ requirements. The sooner the Fund completes this important task, the better it will be able either to identify and correct any instances of noncompliance or to identify and implement any opportunities for improvement. To strengthen performance measurement in the CDFI program, GAO recommends that the Secretary of the Treasury instruct the Director of the Fund to take the following steps: Encourage the greater reporting of accomplishments by awardees. To allay the concerns described by the 1996 awardees, the Director could require that accomplishments be reported in each awardee’s annual impact report. Accomplishment measures should include (1) those that are, without limitation, negotiated with each awardee on the basis of the awardee’s business plan and (2) those that are, to the maximum extent practicable, related to the awardee’s performance goals and measures. Establish procedures for systematically reviewing each awardee’s goals to ensure that the measures address all key aspects of the related goals. Reformat the assistance agreement to include baseline dates and values and information on target markets. The Fund’s Director and Deputy Director for Policy and Programs generally agreed with the information presented in this chapter. 
In addition, they suggested several revisions to improve the accuracy of the information. We have incorporated these revisions. While these officials agreed with our conclusion that the Fund needs to encourage awardees to increase their reporting of accomplishments, they disagreed with our draft report’s proposed recommendation to (1) include accomplishment measures in the performance schedule of the assistance agreement and (2) waive the use of sanctions for measures outside the awardee’s control. As an alternative, the officials suggested that accomplishment measures be included in the annual impact report that the Fund will be requiring from each awardee. Such measures would not be subject to sanctions, and sanctions could be applied only when an awardee failed to submit the required report. Consequently, this approach would not require the Fund to apply sanctions for some measures and waive them for others. Because the Fund’s proposed alternative should achieve GAO’s and the Fund’s mutual objective of increasing awardees’ reporting of accomplishments, we revised our conclusions and recommendation accordingly. Finally, the Fund concurred with our two recommendations for revising the assistance agreement to ensure that awardees’ performance measures address all key aspects of their related goals and include baseline dates and values and information on target markets. Banks have completed most of the activities for which they received 1996 bank enterprise awards. However, isolating the impact of the BEA program is difficult because other regulatory or economic incentives can encourage the same types of investment as the prospect of an award, and limitations in some banks’ data preclude the Fund, in a few cases, from knowing for certain that awardees have increased their net investments in targeted distressed communities as intended. Because the Fund did not require the banks that received 1996 awards to report material changes in the status of rewarded activities, it did not know whether rewarded investments remained long-term increases in the capital available to CDFIs and distressed communities. In addition, because the BEA program rewards banks for increasing activities that they undertake with their own funds, it does not (by statute) address what banks do with their awards. Nevertheless, most banks have reported that they have voluntarily reinvested their awards in lending and investments related to community development. In 1996, the Fund announced a total of $13.1 million in awards to 38 banks. These awards were based on reported increases of $65.8 million in investments in CDFIs and $60 million in loans and financial services in distressed communities. Located in 18 states and the District of Columbia, the 38 banks held assets ranging from $21 million to $320 billion. As figure 3.1 shows, about half of the increased activities that led to the awards supported CDFIs and about half promoted development in distressed communities. As figure 3.2 shows, nearly three-quarters of the banks that supported CDFIs did so through equity investments while the remaining quarter did so through loans. The Fund has attempted to measure the impact of the BEA program by estimating its leverage, that is, the amount of the private investments associated with the awards. The Fund estimates that the $13.1 million in 1996 awards leveraged over $125 million in private investments, a leveraging ratio of almost 10 to 1.
This estimate includes amounts in the rewarded banks’ baselines for similar loans and investments made before the 6-month assessment period and included in the base amounts from which the Fund measured the increases in investment. If the leveraging effect of the BEA program were calculated only on the basis of the increases in investment that led to the awards, we estimate that the leveraging ratio for the 1996 awards would be about 7 to 1—a ratio generally consistent with the 15-percent award for increases in equity investments, which represent about three-quarters of the program’s rewarded activities. As of January 1998, 58 percent of the 1996 awardees (22 of 38 banks) had completed all of the activities for which the Fund had announced an award. In addition, the Fund had disbursed nearly 80 percent of the 1996 award funds. The remaining 20 percent of the award funds were reserved for 16 banks that had not yet completed their planned activities or sought disbursement of their award funds. These 16 banks have just under 4 years to complete these activities and request disbursement of the associated award funds. Four of our five case study banks did not complete as much lending and other financial service activity as they had expected. According to officials from three of these banks, they generally included activities in their applications that they believed would reach closing within the next 6-month assessment period, but some of these activities took longer to reach closing or never closed at all. Officials from one bank said that they did not find it unexpected or uncommon that, on occasion, a variety of factors could slow the normal process of negotiating loan terms, causing minor delays in moving a project to closing. For some of our case study banks, delays meant that a small number of the activities projected to close within the 6-month assessment period took longer than expected and closed after that period had ended. As a result, these banks increased their activities less than projected during the assessment period and will not receive the portions of their 1996 awards associated with the unaccomplished activities. Award funds that are not disbursed to banks remain with the Fund and are available for reallocation to future awardees or for use in the CDFI program as long as the budget authority for the funds has not expired. According to officials at our case study banks and Fund officials, banks invest in CDFIs and provide loans and other financial services in distressed communities for a variety of reasons. In some cases, the prospect of receiving a BEA award may encourage such investment, but in other cases, regulatory or economic incentives exert a strong influence. As a result, it is difficult to isolate the impact of the BEA award from the effects of other incentives. Furthermore, more complete data on some banks’ investments are needed to ensure that the increases in investments in distressed areas rewarded by the BEA program are not being offset by decreases in other investments by the banks in the same areas. Finally, because the Fund did not require banks receiving 1996 awards to report any material changes in rewarded investments, it did not know whether rewarded investments remained long-term increases in the capital available to CDFIs or distressed communities. 
Officials at our case study banks and Fund officials told us that the BEA program creates an incentive for or reduces the risk associated with a bank’s initiating and/or increasing investments in CDFIs and/or distressed communities. For example, according to an official at one of our case study banks, the bank would not have invested $250,000 in a local nonprofit CDFI’s predevelopment loan program if this activity had not been eligible for a 15-percent reward from the Fund. Typically, this official said, the bank does not support predevelopment loan programs, but the prospect of receiving an award mitigated some of the risk associated with this loan, enabling the bank to favorably consider and subsequently make this investment. As further evidence that the program is encouraging investment in CDFIs, a Fund official said that CDFIs have begun to market themselves to banks by noting that new or increased investments in them can result in awards from the Fund of up to 15 percent of the new or additional investment. Our case study banks indicated, and Fund officials agreed, that the prospect of receiving a BEA award was not always the primary incentive for banks to undertake award-eligible activities. Rather, regulatory or economic incentives were important or had already prompted the banks to undertake such activities before they considered applying for a BEA award. One of the reasons our case study banks cited most frequently for undertaking award-eligible activities was to comply with the Community Reinvestment Act (CRA). CRA requires that federal bank regulatory agencies encourage the banks they regulate to help meet credit needs in all areas of the communities served (insofar as is consistent with safe and sound operations), assess the banks’ performance in meeting those needs, and take this performance into account when considering a bank’s request for regulatory approval of a regulated action, such as opening a new branch or acquiring or merging with another bank. Under CRA, banks do not receive direct financial or other rewards from the regulatory agencies for compliance. Also, the BEA program requires that banks not only invest in distressed areas but also increase their investments over and above what they have been doing. The bank regulatory agencies assess each bank’s performance in meeting CRA’s requirements every 2 to 3 years. During these reviews, the examiners check to see whether CRA compliance activities are an ongoing part of the bank’s business. Because BEA-eligible activities, as a rule, count in terms of compliance with CRA, most banks are likely to be engaged in some activities that the BEA program rewards, and when the banks can show an increase in such activities over an assessment period, they can simultaneously demonstrate their compliance with CRA and their eligibility for a BEA award. This overlap between activities that meet CRA’s requirements and are eligible for a BEA award held true at our case study banks. All of the activities that counted toward compliance also counted toward an award. Furthermore, two of the banks received BEA awards totaling over $324,000 for increases in investments they had made or agreed to make as part of ongoing working relationships with CDFIs that predated their applications for the awards. Another bank based its application for a $1.6 million award on those previously planned activities that bank officials judged most likely to reach closing during the bank’s 6-month assessment period. 
The Fund itself acknowledges that compliance with CRA may be a major incentive for some banks to undertake award-eligible activities. In fact, the Fund advertises the link to compliance with CRA in the promotional materials it distributes to banks when it publicizes the BEA program. Economic incentives also motivated banks’ investments in award-eligible activities. According to officials at our case study banks, they do the kinds of business that the BEA program rewards in the markets targeted by the BEA program because they benefit from such business and/or have made a corporate commitment to community lending. The banks benefit, the officials said, because their award-eligible investments help them to maintain market share in areas targeted by the BEA program, to compete with other banks in these areas, or to build up markets that they expect will be profitable in the future. In addition, one bank indicated that its award-eligible activities lay the groundwork for building new markets with community groups, and two other banks cited improved community relations as a further incentive for their activities. Although banks may derive future as well as current economic benefits from their award-eligible activities, these activities have to stand on their own without a subsidy from any of the banks’ other lines of business, according to officials at all five case study banks. These officials cited the solid track records of the CDFIs with which their banks were already working and/or the successful performance of their banks in distressed or low- and moderate-income communities as important reasons for continuing to do business there. The case study banks had measured their performance and demonstrated their success in distressed communities through measures such as loan repayment rates; reports on occupancy rates and the financial performance of housing projects financed by the banks; and, in one case, the frequent presence of a bank official on CDFI investees’ boards of directors. Because awards in the BEA program can be relatively small (especially for larger banks) and are made retrospectively (that is, after banks have begun or completed activities), the awards may be too small or may come too late to have much influence on banks’ investment activities. To an extent that neither we nor the Fund can quantify, banks are receiving awards for activities that stem from ongoing relationships with CDFIs, nonprofit groups, or others in the community and/or for investments they would have made without the prospect of an award from the BEA program. The Fund acknowledges that, in a few cases, some of the direct lending and other financial service activities it is rewarding—particularly agricultural, consumer, and small business loans—may come at the expense of similar eligible activities in the same distressed communities the awardees are serving. For this reason, the Fund was concerned that awardees could increase the loans and other services included in their applications while decreasing other eligible activities that are not included in their applications but are targeted to the same communities. To be certain that its overall volume of activity in a distressed community has increased, a bank must know the location of each entity that received a loan, whether that location qualifies as distressed, and whether its total eligible activity in that location, not just the activity included in its application, has increased.
Readily linking a loan’s location with economic indicators of distress can require detailed information, such as data identifying the census tract in which the entity that received the loan is located. If a bank is not tracking the volume of its loan activity by census tract, the Fund cannot be sure that increases in some types of qualified activities have not been offset by decreases in other types of qualified activities in the same distressed community. In reviewing the 1996 applications, the Fund discovered that some banks that were applying for awards on the basis of increases in these activities had not collected detailed information, such as loan activity by census tract, on the location of these activities. Consequently, they could not demonstrate that they had not decreased other award-eligible activities in distressed communities to support the increased activities included in their applications. For the 1996 awards, the Fund allowed banks without the requisite information to sign an addendum to their award agreement in which they stated that, to the best of their knowledge, they had not decreased other eligible activities that were not included in their application. While most such banks signed the addendum and submitted an application, an official at one of our case study banks said that the bank sought an award on the basis of its investments in CDFIs but not on the basis of any of its lending in distressed communities because its data on loans did not include census tract information and it was not willing to risk the chance that its overall level of activity in distressed communities might have decreased. For the 1997 awards, the Fund hired a contractor to help banks determine whether the areas they were seeking to target would be eligible under the BEA program’s guidelines. The Fund requires the banks to use this contractor’s services in the initial stages of applying to the program. The contractor determines which census tracts in a bank’s service area meet the Fund’s definition of a distressed community but does not identify which activities took place in those tracts. Many banks were not identifying the census tracts of their consumer, small business, or agricultural loans and could not have determined which ones took place in a distressed community identified by the Fund’s contractor unless they hand-coded or geocoded their lending data. According to a Fund official, using the Fund’s contractor means that each applicant has a better—but not perfect—idea of which loans in its service area took (or will take) place in distressed communities. Nonetheless, a bank that has not geocoded its loans would likely still find it very time-consuming to assign a census tract to all of its consumer, small business, and agricultural loans because it would have to do so for all of these loans, not just those included in its application to the BEA program. According to Fund officials, more and more banks are collecting census data on their loans and fewer banks had to sign the addendum for the 1997 awards than for the 1996 awards. However, because the Fund does not require banks to collect census data on their loans, Fund officials acknowledged that, to some extent, the Fund may still be rewarding banks that increased some activities at the expense of others that were not included in their applications but also took place in the same distressed communities.
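For a bank that does geocode its lending, the kind of tract-level check the Fund is concerned about could in principle be automated. The sketch below is a simplified, hypothetical illustration in Python; the census tract identifiers, loan amounts, and two-period comparison are invented rather than drawn from the Fund or any awardee. It totals all eligible loan dollars in each distressed tract for a baseline period and for the assessment period and flags any tract whose total declined, since such a decline would suggest that the increases included in an application came at the expense of other eligible activity in the same community.

    # Hypothetical sketch: loan records geocoded to census tracts. The tract
    # identifiers, dollar amounts, and distressed-tract list are invented.
    from collections import defaultdict

    distressed_tracts = {"3101", "3102"}  # tracts identified as distressed (hypothetical)

    baseline_loans = [      # all eligible activity in the prior period: (tract, dollars)
        ("3101", 250_000), ("3102", 400_000),
    ]
    assessment_loans = [    # all eligible activity in the 6-month assessment period
        ("3101", 600_000), ("3102", 150_000),
    ]

    def totals_by_tract(loans):
        """Sum eligible loan dollars in each distressed census tract."""
        totals = defaultdict(int)
        for tract, amount in loans:
            if tract in distressed_tracts:
                totals[tract] += amount
        return totals

    before = totals_by_tract(baseline_loans)
    after = totals_by_tract(assessment_loans)
    for tract in sorted(distressed_tracts):
        change = after.get(tract, 0) - before.get(tract, 0)
        flag = "  <-- total eligible activity declined" if change < 0 else ""
        print(f"Tract {tract}: change {change:+,}{flag}")

In practice, such a tally would have to cover every category of eligible activity, including consumer, small business, and agricultural lending as well as other financial services, which is why banks without tract-level data could not demonstrate that no offsetting decreases had occurred.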
As a result, for banks that receive awards for direct lending and financial services but do not collect census data on these activities, the Fund cannot say with certainty that the BEA program is meeting its objective of rewarding net increases in investment in distressed communities, because it cannot rule out that the rewarded increases came at the expense of other award-eligible activities in the same communities. The 1996 BEA awardees were not obligated to report information, including any material changes in their rewarded investments, once they completed the tasks they had agreed to perform with their own funds to receive their awards. One of our case study banks received an award for increasing its investment in a CDFI that was later dissolved. Specifically, the bank received $37,500, or 15 percent of the $250,000 by which it increased its investment in a certified CDFI during its assessment period. However, several months after the bank received the award funds, the CDFI’s board of directors voted to dissolve the institution because of a declining market for its loans and dissatisfaction with its performance. Upon dissolving the CDFI, the board returned its remaining capital pro rata to the banks that had invested in it, including the bank that had received the award for its increased investment. Thus, even though the bank received an award from the BEA program, the rewarded increase in activity did not last. After our visit and the CDFI’s dissolution, the bank made a commitment to use all of the funds returned to it to capitalize a microloan fund at a different certified CDFI. The Fund was not aware of this material change in our case study bank’s rewarded investment until we brought it to the Fund’s attention. Furthermore, because neither this nor any other 1996 awardee was required to report any material change, the Fund had no systematic means of learning of any significant change in a bank’s rewarded activity. The Fund has since established a requirement for awardees to report any material change in their rewarded investments so that it will be systematically informed of any important reduction in these investments. The CDFI Act does not require banks to report how they use their BEA awards because the objectives of the BEA program and the rules governing it apply only to how the banks use their own funds. However, according to the Fund, most of the 1996 awardees reported using their awards to further the objectives of the BEA program. Each of our five case study banks also reported using its award money to expand its existing investments in community development. For example, one of the banks said that it used its award money to establish a community development leadership curriculum and training program, addressing topics such as innovative economic development and affordable housing strategies. This bank expects that its support will enable the group developing the curriculum to provide training to 100 senior managers from community development organizations. Another of the banks reported using its award funds to make a grant to the National Community Capital Association (formerly the National Association of Community Development Loan Funds) to enable the association to train CDFI staff and board members. However, neither we nor the Fund determined whether the banks used all or a portion of their award funds to benefit communities meeting the same eligibility criteria as those that benefited from the initial increases in the banks’ investments.
In commenting on a draft of this report, the Fund’s Director and Deputy Director for Policy and Programs told us that, in response to the information we presented on a material change in one bank’s rewarded investment, the Fund has adopted a requirement for banks to report any material change in their rewarded investments. As a result, we are no longer making the recommendation to this effect that appeared in our draft report. We also made minor technical and clarifying changes to this chapter suggested by the Fund. The Fund’s current strategic plan contains the six basic elements required by the Results Act, but these elements generally lack the clarity, specificity, and linkage with one another that the act envisioned. Although the plan identifies key external factors that could affect the Fund’s mission, it does not relate these factors to the Fund’s strategic goals and objectives and does not indicate how the Fund will take the factors into account when assessing awardees’ progress. In addition, the plan does not explicitly describe the relationship of the Fund’s activities to similar activities in other government agencies, and it does not indicate whether or how the Fund coordinated with other agencies in developing its strategic plan. Additionally, it is somewhat unclear at this point whether the Fund has the capacity to provide reliable information on the achievement of its strategic objectives. The Fund’s difficulties in developing a strategic plan are not unique. We have found that strategic planning efforts at all federal agencies are still works in progress. Many agencies are struggling, like the Fund, to set a strategic direction, coordinate crosscutting programs, and ensure the capacity to gather and use performance and cost data. As the Results Act directs, the Fund is taking steps to refine its strategic plan. These steps appear to address the difficulties we observed in the current plan. The Results Act requires that an agency’s strategic plan contain six key elements: (1) a comprehensive mission statement; (2) agencywide long-term (strategic) goals and objectives for all major functions and operations; (3) approaches (or strategies), skills, technologies, and the various resources needed to achieve the goals and objectives; (4) a description of the relationship between the long-term goals and objectives and the annual performance goals; (5) an identification of key factors, external to the agency and beyond its control, that could significantly affect the achievement of the strategic goals and objectives; and (6) a description of how program evaluations were used to establish or revise strategic goals and objectives and a schedule for future program evaluations. OMB Circular A-11 provides agencies with additional guidance on developing their strategic plans and discusses additional information that they may include in those plans. The circular emphasizes the importance of agencies’ strategic plans, noting that they “provide the framework for implementing all other parts of [the] Act, and are a key part of the effort to improve performance of government programs and operations.” Because the plan matches programs and activities to the agency’s mission and objectives, it can be used, according to the circular, to align the organization and budget structure of the agency with its mission, guide the agency in formulating its budget, and help the agency set priorities and allocate resources in accordance with these priorities.
The Results Act anticipates that an agency may take several planning cycles to refine and perfect its strategic plan. According to the Fund’s strategic plan, “the mission of the CDFI Fund, as drawn from the Riegle Community Development and Regulatory Improvement Act of 1994, is to promote economic revitalization and community development through investment in and assistance to community development financial institutions (CDFIs) and through encouraging insured depository institutions to increase lending, financial services and technical assistance within distressed communities and to invest in CDFIs.” This statement generally satisfies the requirements of the Results Act and OMB Circular A-11 for a comprehensive mission statement summarizing an agency’s major functions. The statement explicitly refers to the Fund’s statutory objectives and indicates how these statutory objectives are to be achieved through the two core programs. The Results Act and OMB Circular A-11 require that each agency’s strategic plan set out goals and objectives for the major functions and operations of the agency and that the goals and objectives elaborate on how the agency is carrying out its mission. The Fund’s strategic plan articulates five goals, each with at least two objectives, as shown in table 4.1. Although the plan does not characterize the goals as such, we observed that the first three goals are mission related and the last two are organizational. We also noted that the first three goals are results oriented and cover the major functions and operations of the Fund, as the Results Act and OMB Circular A-11 direct. However, the last two goals are process oriented—characterizing how the Fund intends to improve the performance and operations of its own programs instead of projecting an outcome resulting from the Fund’s actions. Because processes are not goals, these two would be more appropriately incorporated in the strategies needed to achieve strategic goals and objectives. OMB Circular A-11 suggests that strategic goals and objectives be stated so as to allow a future assessment of their accomplishment. Because none of the 5 goals and 13 objectives in the strategic plan include baseline dates and values, deadlines, or target markets, the Fund’s goals and objectives do not meet this criterion. In contrast, one of the Federal Emergency Management Agency’s strategic goals is to “protect lives and prevent the loss of property from all hazards.” An objective for achieving this goal is “by the end of fiscal year 2002, reduce by 10 percent the risk of loss of life and injury from hazards.” By including specific deadlines and targets, this objective meets the criterion. The Results Act requires that each agency’s strategic plan describe how the agency’s goals and objectives are to be achieved. OMB’s guidance suggests that the plan briefly describe the operational processes; the skills and technologies; and the human, capital, information, and other resources needed to achieve the goals and objectives. Additionally, Circular A-11 recommends that strategies outline how agencies will communicate strategic goals throughout the organization and hold managers and staff accountable for achieving these goals. The Fund’s plan shows mixed results in meeting these requirements. On the positive side, the plan clearly lists strategies for accomplishing each goal and objective. This approach is preferable to other approaches we have seen in which strategies were not integrally linked to objectives.
Because the Fund’s plan more clearly links the key strategies, objectives, and goals, it is more valuable to users. While the links in the Fund’s strategic plan are clear, the strategies outlined in the plan consist entirely of one-line statements. Because they generally lack detail, it is unclear whether their accomplishment would help achieve the plan’s goals and objectives. For example, it is unclear how “emphasizing high-quality standards in implementing the CDFI program” will strengthen and expand the national network of CDFIs. Additionally, in discussing strategies to achieve its goals and objectives, the Fund’s strategic plan does not, as the Results Act requires, describe the resources—such as the staff, capital, and technologies—that are needed to achieve the objectives. Rather, the Fund’s plan contains a separate section that describes, in general terms, the resources needed to implement the entire plan. Specifically, this section states that “the Fund will use many resources to accomplish its goals, including anticipated appropriations, the knowledge and skills of its staff, information technology, financial systems, and operational processes.” However, it is not clear how these resources will be used to implement specific objectives. For example, one strategy—developing and implementing a secondary market initiative—is proposed for achieving one objective—increasing liquidity for CDFIs—but the resources needed to carry out this strategy are not specified. The Fund’s strategic plan also does not include several elements specified in Circular A-11. For example, the plan does not include (1) schedules for initiating or completing significant actions, including underlying assumptions, or (2) an outline of the processes for communicating the goals and objectives to the Fund’s staff and for assigning accountability to managers and staff for achieving the strategic plan’s objectives. Under the Results Act, each strategic goal must be linked to annual performance goals. A performance goal is the target level of performance expressed as a tangible, measurable objective against which actual achievement is to be compared. An annual performance goal is to consist of two parts: (1) the measure that represents the specific characteristic of the program used to gauge performance and (2) the target level of performance to be achieved during a given fiscal year for the measure. While strategic plans are not required to identify specific performance measures, OMB Circular A-11 recommends that the plans briefly relate strategic goals and objectives to annual performance goals. The guidance suggests that the plans also include descriptions of the type, nature, and scope of the performance goals included in the performance plans, as well as the relevance and use of those performance goals to help determine the achievement of the strategic goals and objectives. The Fund’s strategic plan lists 22 performance goals, which are clearly linked to specific strategic goals. However, the plan does not include a key performance goal—leveraging other resources—to meet two of its strategic goals, one of which is “to strengthen and expand the national network of CDFIs” and the other of which is to “encourage investments in CDFIs by insured depository institutions.” While this leveraging goal is embedded in the strategies that the Fund has outlined for achieving these two strategic goals, it is not included as a way to gauge progress under the annual performance goals. 
Furthermore, the performance goals generally lack the specificity, as well as the baseline and end values, that would make them more tangible and measurable. For example, one performance goal is to “increase the number of applicants in the BEA program.” This goal would be more useful and measurable if it specified the baseline number of applicants as well as the number of additional applicants projected within a specific time frame. Finally, some performance goals are stated more as strategies (e.g., “propose legislative improvements to the BEA program” or “survey program participants on policies and standards”) than as desired results, and the ways in which individual performance goals support strategic goals are not always clear. For instance, it is not readily apparent how the performance goal of proposing legislative improvements to the BEA program will support the strategic goal of encouraging banks’ investments in CDFIs. The Fund’s strategic plan only partially meets the requirement of the Results Act to describe key factors that are external to an agency and beyond its control that could significantly affect the achievement of its objectives. OMB Circular A-11 states that a strategic plan should describe each key external factor, indicate its link with a particular strategic objective, and describe how the achievement of the objective could be affected by the factor. The Fund’s plan briefly discusses external factors that could materially affect its performance, such as national and regional economic trends and changes in the demographics of the labor force that may require the development of a multifaceted and flexible human resource program. However, the plan does not link these external factors to specific strategic objectives. In addition, the plan does not cover all key external factors that could materially affect the Fund’s performance. For instance, the plan does not mention the continuation of outside funding, yet, as indicated in chapter 1, CDFIs must obtain outside funding to be eligible for participation in the CDFI program. Without the continuation of outside funding, the Fund’s ability to expand the network of CDFIs could be substantially diminished. The Results Act, supplemented by OMB’s guidance, requires that strategic plans describe (1) the program evaluations used to prepare the plans and (2) the schedule for future evaluations. The Fund’s strategic plan generally does not discuss the evaluations used in its development. Although the plan refers to past evaluations by the Department of the Treasury’s Office of Inspector General, which the Fund says were used to assist in the “development of the Fund’s programs and design of the Fund’s internal systems,” it is not clear how, or if, these evaluations were used to develop the goals, objectives, and strategies outlined in the Fund’s strategic plan. The Results Act also requires a discussion of completed and future program evaluations, which can be a critical source of information to ensure the validity and reasonableness of goals and objectives and to explain results in the agency’s annual performance plan. The Results Act defines program evaluations as assessments, through objective measurement and objective analysis, of the manner and extent to which federal programs achieve their intended objectives. The Fund’s plan does discuss options that the Fund is considering for evaluating its own effectiveness and its impact on financial intermediaries dedicated to supporting community development.
However, the plan does not include a schedule for future program evaluations. Furthermore, the list of options does not refer to the CDFI and BEA evaluations or surveys described in earlier sections of the plan. In its strategic plan, the Fund states that it will “coordinate its strategies with other Treasury bureaus and agencies with similar missions.” The Fund’s strategic plan does not specifically address the relationship of the Fund’s activities to similar activities in other agencies and does not indicate whether or how the Fund coordinated with other agencies in developing the strategic plan. Yet numerous government and private-sector agencies are involved in providing access to capital to achieve community and economic development. Interagency coordination is important for ensuring that crosscutting programs are mutually reinforcing and efficiently implemented. Therefore, the plan would be strengthened if it identified and incorporated some descriptions of other agencies’ programs with similar missions and discussed their influence on the Fund’s strategic objectives. For example, a recent study published by the President’s Community Empowerment Board identified several programs in other federal agencies with missions similar to that of the CDFI program, including the following: The departments of Agriculture and of Housing and Urban Development administer the Empowerment Zone and Enterprise Community program, which was authorized to revitalize deteriorating urban and rural communities. This program targets federal grants to distressed communities for community redevelopment and social services and provides tax and regulatory relief to attract or retain businesses in the communities. In its objectives and in the types of communities it targets, this program is similar to the CDFI program. Furthermore, the program’s structure mirrors that of the CDFI program in that applicants must submit a strategic plan that, like the CDFI program’s assistance agreements, must outline baselines, methods, and benchmarks for measuring their success in the targeted communities. Finally, performance is tracked in both programs to measure the impact of awardees’ activities in the distressed communities. The Small Business Administration (SBA) operates various programs to aid, counsel, assist, and protect the interests of small businesses. For example, the Small Business Investment Company (SBIC) and the Specialized Small Business Investment Company (SSBIC) programs are designed to fill the gap between the availability of venture capital and the needs of small businesses that are starting up or growing. SBICs, which are licensed and regulated by SBA, are privately owned and managed investment firms that use their own capital, plus funds borrowed at favorable rates with an SBA guarantee, to make equity investments and/or loans to small businesses. SSBICs invest in small businesses owned by entrepreneurs who are socially or economically disadvantaged. Not only are the objectives of these programs consistent with those of the Fund, in that they provide access to capital for economic development, but they are also similar to those of venture capital CDFIs that provide financial and technical assistance to start-up businesses. Moreover, they are structured similarly to the Fund’s CDFI and BEA programs in that they provide access to capital by leveraging federal resources. SBA also administers a microloan program that increases the availability of very small loans to prospective small business borrowers.
Under this program, SBA makes funds available to nonprofit intermediaries, which in turn make loans to eligible borrowers, much as the CDFI program makes funds available to CDFIs, which then make loans to microenterprises. Among private organizations, the Ford Foundation is the largest supporter of community development. Specifically, it supports efforts to create economic opportunities and financial institutions that respond to the needs of the poor, as well as efforts to give the poor greater ownership and control of key community institutions. Several of the CDFIs that received 1996 awards from the Fund also received funding from the Ford Foundation. To measure progress in achieving its strategic objectives, the Fund needs reliable data. The Fund has not yet developed its strategic plan sufficiently to identify the types and sources of data needed to evaluate its progress toward the objectives outlined in the plan. Moreover, according to the KPMG Peat Marwick study identified in chapter 2, as of February 1998, the Fund had yet to set up a formal system, including procedures to continuously monitor, evaluate, and improve the effectiveness of the management controls associated with the CDFI program. These procedures would ensure that the periodic performance reports submitted by awardees are received, reviewed, and acted upon by the Fund in the event of potential noncompliance. Until the Fund identifies the types of data needed to monitor and evaluate awardees and incorporates these data needs in a formal system, it will be hampered in its ability to report on its progress toward achieving its stated goals and objectives. The Fund intends to continue designing and developing a portfolio-monitoring database system during fiscal year 1998 as part of its efforts to design and implement its monitoring procedures. The Fund’s strategic plan has shortcomings common to the plans of most other federal agencies. We reported on these shortcomings in our September 1997 review of 27 agencies’ draft strategic plans. We found that a significant amount of work remained to be done by executive branch agencies before their strategic plans could fulfill the requirements of the Results Act, serve as a basis for guiding agencies, and help congressional and other policymakers make decisions about activities and programs. Although all 27 of the draft plans included a mission statement, 21 plans lacked 1 or more of the other required elements. In summary, for the 27 draft strategic plans, we found that (1) most did not adequately link required elements in the plans; (2) several contained goals that were not as results oriented as they could have been; (3) many agencies did not fully develop strategies explaining how their long-term strategic goals would be achieved; (4) most agencies did not identify or provide for coordinating activities and programs that cut across multiple agencies; (5) the limited capacity of many agencies to gather performance information has hampered their efforts to identify appropriate goals and confidently assess their performance; and (6) no agency’s draft strategic plan provided adequately for program evaluations. Consistent with the Results Act’s requirement that agencies continually refine their plans, the Fund is updating its strategic plan and expects to have a revised plan by August 1998.
According to a key Fund official, the updated plan will address not only the shortcomings we identified in a May 1998 authorization hearing on the Fund, but also those cited in the Department of the Treasury’s February 1998 review of the Fund’s plan. The following are among the key changes the Fund plans to make: In revising its strategic goals, it plans to eliminate the two organizational goals (i.e., to improve program performance and management operations) included in table 4.1 because they are not directly related to the Fund’s mission. It plans to change the format for presenting the goals and objectives by linking benchmarks and planned evaluations to each goal, as well as to the key external factors that could affect the Fund’s ability to meet these goals. The Fund believes that this change will improve its ability to assess how well its strategies and approaches for meeting its strategic goals are working. For example, one of the goals proposed in the Fund’s revised plan is to increase participation in the CDFI program. The Fund plans to hold workshops to increase participation in the program and to evaluate this strategy by collecting data to track the number of (1) participants in the workshops, (2) participants who apply for CDFI funding, and (3) applicants who receive CDFI awards. The Fund plans to use this information to evaluate the effectiveness of the workshops in increasing participation. It plans to revise its budget structure to better link program activities with funding sources. By presenting the budget this way, the Fund expects to improve its tracking of the resources used to implement the goals and objectives outlined in its strategic plan, as well as to develop annual plans that tie these goals and objectives to its budget. It plans to revise its performance goals to include a measure of its ability to leverage other resources. It plans to identify crosscutting organizations and programs and assess the extent to which its programs duplicate or complement these efforts. The Fund is revising its strategic plan to address the major shortcomings we observed in the current plan. Through these revisions, the Fund will have better defined its goals and better identified its strategies for achieving them, including its strategy for allocating its resources, thereby laying the foundation for determining its success in reaching results-oriented as well as non-results-oriented goals and objectives. Because strategic planning is not static, the Fund will need to continuously revise and refine its strategic plan to reflect the dynamic nature of the CDFI industry. If this process is done well, the Fund’s strategic planning efforts should facilitate informed communication between the Fund and its stakeholders—that is, those organizations potentially affected by or interested in the Fund’s activities. The Fund officials agreed with the information in this chapter and, as we noted earlier, are taking steps to refine the Fund’s strategic plan. These steps appear to address the difficulties we observed. The legislation creating the Fund required that GAO report on the Fund’s structure, governance, and performance 30 months after the appointment of an Administrator of the Fund. However, as noted in chapter 1 and as agreed with your offices, this report does not review the structure and governance of the Fund because the Department of the Treasury’s Inspector General is conducting an audit addressing these issues. 
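To illustrate the kind of workshop tracking the Fund describes for its revised plan, the sketch below computes simple conversion rates from the three counts the Fund plans to collect. The figures and the function are hypothetical and are offered only as one way such data could be summarized, not as a description of the Fund's actual evaluation system.

    # Illustrative only: a minimal sketch of the workshop-tracking approach described
    # above. The counts below are hypothetical, not Fund data.

    def conversion_rates(workshop_participants, applicants, awardees):
        """Return the share of workshop participants who applied for CDFI funding
        and the share of applicants who received awards, guarding against
        division by zero."""
        applied_rate = applicants / workshop_participants if workshop_participants else 0.0
        award_rate = awardees / applicants if applicants else 0.0
        return applied_rate, award_rate

    # Hypothetical figures for one funding round.
    applied, awarded = conversion_rates(workshop_participants=200, applicants=60, awardees=15)
    print(f"Applied after attending a workshop: {applied:.0%}")  # 30%
    print(f"Applicants receiving awards: {awarded:.0%}")         # 25%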
This report discusses (1) the progress of the Community Development Financial Institutions (CDFI) Fund in developing performance measures for awardees in the CDFI program and systems to monitor and evaluate their progress in meeting their performance goals, as well as the accomplishments they have reported to date; (2) the performance of banks under the Bank Enterprise Award (BEA) program, the impact of the program on banks’ activities and on distressed communities, and the uses to which banks have put their award funds; and (3) the Fund’s progress in meeting the Results Act’s requirements for strategic planning and the steps the Fund could take to improve its management. Our report focuses on the first round of awards, which the Fund made in 1996, and draws on our interviews with Fund officials; our case studies of awardees, including six CDFIs in the CDFI program and five banks in the BEA program, which we conducted to explore our objectives in more depth; the results of our survey of the CDFI field on its use of performance measures; and our analysis and review of the CDFI program’s assistance agreements. To meet our objectives in reviewing the CDFI program, we reviewed the Fund’s process for setting goals and developing performance measures with the 31 1996 awardees and discussed the Fund’s performance measurement system—including its development, operation, and underlying assumptions—with various Fund officials responsible for working with the awardees. In addition, to supplement the information about the CDFI program that we gathered at the Fund’s headquarters in Washington, D.C., we randomly selected six awardees as case studies to gain these awardees’ perspectives on the process of developing performance measures, as well as to gather data that the Fund does not collect, such as information on the reporting requirements that other funding sources impose on awardees and the performance measures that these sources require. We randomly selected these case studies from the universe of 24 awardees that the Fund told us had signed their assistance agreements by November 1, 1997. Because the CDFI program serves a wide variety of community development organizations, we stratified our random selection by the types of CDFI awardees described in chapter 1 to ensure that our case studies included at least one of each of the most common types of CDFIs. This stratification generally mirrored the categorization used by the CDFI Fund in its first annual report. If a CDFI category was represented by only one or two awardees, then that category was combined with the closest similar category. The six case study awardees were a community development bank holding company, a community development venture capital fund, a community development credit union, a microenterprise fund, a community development loan fund, and a multifaceted community development financial institution. We obtained information from the awardees on the negotiations that took place between each of them and the Fund to develop the performance goals, measures, and benchmarks outlined in their assistance agreements. We discussed this information with knowledgeable Fund staff and reviewed documentation pertinent to the activities for which awards were made, including the awardees’ business plans, assistance agreements, and correspondence about the negotiation process. To describe the performance measurement and monitoring systems used in the CDFI field, we conducted a national survey of CDFI organizations. 
To identify these organizations, we began by obtaining from the CDFI Fund in October 1997 its most recent list of certified CDFIs, dated June 1997, as well as lists of applicants for awards in the first (1996) and second (1997) funding rounds. In addition, we obtained membership lists from the following national community development professional organizations: the CDFI Coalition, the National Association of Community Development Credit Unions, the National Community Capital Association, and the Community Development Venture Capital Alliance. We also obtained lists from the Neighborhood Reinvestment Corporation and the Aspen Institute that identified neighborhood housing services and microenterprise development organizations with loan funds. Our objective was to survey not only certified CDFIs but also other CDFIs that might be eligible for certification. Using these lists, we identified a total of 925 organizations that described themselves as CDFIs. We do not believe that our list includes all such organizations nationwide. We recognize that there are probably other community development organizations that could be certified by the Fund as a CDFI, but are currently unknown to either the Fund or one of the national CDFI associations. To encourage responses, we sent follow-up letters and a second survey to those organizations that did not return a survey after the first mailing. In total, 623 institutions responded to our questionnaire, for a 67-percent response rate. Respondents included 87 percent of the 1996 awardees and 77 percent of the 187 CDFIs certified by the Fund as of June 1997. To categorize the goals and measures used by the 26 awardees that had signed their assistance agreements with the Fund by February 1998, we conducted a content analysis of the performance goals and measures included in their agreements. The questionnaire used for the survey of all CDFIs provided the framework for this analysis. The questionnaire contains an extensive list of responses that are grouped by the categories of accomplishments and activities for goals and measures. For example, one question asks about measures of community development accomplishments and lists 12 measures that an institution might use. The assistance agreements contained 87 goals and 165 measures. Each goal and each measure was classified as a specific activity or accomplishment by two evaluators who worked independently. Differences were resolved with a third evaluator, so that the evaluators reached complete agreement on their classification of goals and measures. We also evaluated aspects of the quality of the goals and measures in the assistance agreements, using criteria for evaluation developed on the basis of the Results Act, supplemented by Circular A-11, the Office of Management and Budget’s (OMB) implementing guidance, and a GAO document entitled The Results Act: An Evaluator’s Guide to Assessing Agency Annual Performance Plans (GAO/GGD-10.1.20, Apr. 1998). These three documents provide more explicit guidance for developing performance goals and measures than the CDFI legislation. We refer to these three documents as “the Results Act and its guidance” throughout this appendix. These evaluation criteria included the specificity, objectivity, completeness, and appropriateness of the measures. An additional criterion was whether or not the goals and measures addressed the purpose of the CDFI program, that is, to promote economic revitalization or economic development. 
Using these criteria, we constructed the following set of questions: Does the goal promote economic revitalization and community development? Does each measure have a clearly apparent or commonly accepted relationship to the intended result? Are only certain observations included in the measure? Is what is to be observed or measured specified? Is any description of what is being measured given? Do all the measures use terms that are generally known, or if not known, are they described? Is an evaluation date (target date) given? Is an evaluation value (target value) given? Is a baseline date given? Is a baseline value given? Is a target population or geographic area given? Each question could be answered with a yes or a no. For nine questions, the unit of analysis was an individual goal and its associated measure(s), referred to as a “goal statement.” For the two remaining questions, the unit of analysis was an individual measure. Two teams of two GAO evaluators performed the assessments; each team assessed approximately half of the goal statements. The two members of each team independently assessed each goal statement and then compared their answers. Disagreements were discussed and resolved. After the members of each team reached agreement, we compared the assessments performed by the two teams, reviewing the ratings each had given to a subset of 8 goal statements (11 measures) that both teams had assessed. Reliability (percentage of concurrence on ratings) between the teams was 96 percent. We finished categorizing the goals and measures in the 26 assistance agreements by reviewing all of the goals and measures to determine whether the measures in each goal statement addressed all key aspects of each goal. We drew on our knowledge of the CDFI program, community development and housing finance, and OMB’s guidance on the Results Act to make this judgmental determination and to identify instances in which measures did or did not address all key aspects of the associated goals. Finally, we reviewed the Fund’s statutory, regulatory, and other reporting and monitoring requirements, as well as an existing study of the Fund’s monitoring system, and we held discussions with Fund officials and officials at selected CDFI case studies to assess the Fund’s progress in developing systems to monitor and evaluate the progress of the CDFI awardees in meeting their performance goals. We also reviewed quarterly progress reports submitted by 19 of the 1996 awardees and held discussions with case study officials to assess the awardees’ progress. For the BEA program and its awardees, we performed work at Fund headquarters similar to that we performed for the CDFI program, reviewing the Fund’s guidance, policies, procedures, and other materials on the awards process and discussing these issues and others related to the program with the Fund officials administering the program. For data on the banks’ performance under the program, we relied on the Fund’s status report on the activities completed by awardees as of January 1998. This report includes the status of disbursements made by the Fund for specific activities completed in accordance with the awardees’ agreements with the Fund.
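The reliability check described above, in which the two assessment teams' ratings were compared on a common subset of goal statements, is a simple percent-agreement calculation. The sketch below illustrates that calculation with hypothetical yes/no ratings; the data shown are not drawn from the assistance agreement assessments.

    # Illustrative only: percentage of concurrence between two sets of yes/no ratings.
    # The ratings below are hypothetical.

    def percent_concurrence(team_a, team_b):
        """Share of items on which two raters gave the same answer."""
        if len(team_a) != len(team_b):
            raise ValueError("both teams must rate the same items")
        matches = sum(a == b for a, b in zip(team_a, team_b))
        return matches / len(team_a)

    # Hypothetical ratings for 25 yes/no questions answered by both teams.
    team_a = ["yes"] * 20 + ["no"] * 5
    team_b = ["yes"] * 19 + ["no"] * 6
    print(f"Concurrence: {percent_concurrence(team_a, team_b):.0%}")  # 96%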
We conducted case studies of selected BEA awardees to explore our objectives in more depth, obtaining awardees’ perspectives on the process of applying for an award and gathering information from awardees that the Fund does not collect systematically, such as information on the incentives banks identify for participating in the program and the ways they monitor and measure the progress of their investments. We also verified with these awardees (1) the information we obtained from the Fund’s January 1998 status report and (2) the information the banks voluntarily reported to the Fund on their uses of award funds. We judgmentally selected a sample of five awardees for our case studies, ensuring that collectively they represented the full range of activities for which banks can receive awards. The five awardees and the activities on which their awards were based were as follows: Fullerton Savings and Loan, Fullerton, California (increased lending for single-family and multifamily housing); Bank of America Community Development Bank, Walnut Creek, California (increased lending for commercial real estate, multifamily housing, and small businesses); First Union Bank, Washington, D.C. (increased lending for multifamily housing); Citibank, N.A., New York, New York (increased investments in CDFIs); and Wells Fargo Bank of Texas, N.A., Houston, Texas (increased investments in CDFIs). To identify opportunities for improving the Fund’s management, we asked officials we had interviewed, both at Fund headquarters and at our CDFI and BEA case studies, to identify ways of improving the 1996 awards, monitoring, and performance measurement processes. We also reviewed studies of the Fund that outlined areas for improvement and determined the extent to which these areas had or had not been addressed in the 1997 process. Our overall assessment of the Fund’s strategic plan was generally based on our knowledge of the Fund’s operations and programs, as well as on other information available at the time of our assessment. Specifically, the criteria we used to determine whether the Fund’s strategic plan complied with the requirements of the Results Act were the Results Act and OMB’s guidance on developing strategic plans (OMB Circular A-11, Part 2). To make judgments about the overall quality of the plan, we used our May 1997 guidance for congressional review of strategic plans (GAO/GGD-10.1.16) as a tool. To determine whether the plan contained information on interagency coordination, we relied on the President’s Community Empowerment Board’s Federal Programs Guide. Finally, we reviewed the Fund’s fiscal year 1997 annual report and consulted with knowledgeable staff in our Accounting and Financial Management Division as part of our efforts to assess whether the Fund had adequate systems in place to provide reliable information on performance. We conducted our work at Treasury headquarters and at the offices of selected CDFI and BEA awardees throughout the country. We performed our review between July 1997 and June 1998 in accordance with generally accepted government auditing standards. We relied on data provided to us by the Fund, by awardees in both the CDFI and the BEA programs, and by CDFIs responding to our survey. We did not verify the data on award amounts, disbursements, or quarterly and annual reports for the CDFI awardees as a whole or for our case studies because our work focused on the process of developing performance goals and measures, which were themselves documented in the assistance agreements.
Because our conclusions and recommendations are based on that process and the resulting goals and measures, and not on the financial or performance data, we consider them to be valid. We verified with the BEA awardees the data on award amounts, disbursements, and postaward uses of funds that we obtained from the Fund.

Major contributors to this report were Leslie Black-Plumeau, Carolyn Boyce, Diane Brooks, Stan Czerwinski, Patricia Farrell Donahue, Elizabeth R. Eisentadt, Dennis Fricke, Kimberly Hutchens, Bill MacBlane, John McGrail, Lynn Musser, Hattie Poole, Marilyn Rubin, James Vitarello, and John Vocino.
Pursuant to a legislative requirement, GAO reviewed the performance of the Community Development Financial Institutions (CDFI) Fund, focusing on: (1) the first round of awards in the CDFI and Bank Enterprise Award (BEA) programs and the strategic plan that the Fund developed under the Government Performance and Results Act of 1993; (2) the Fund's development of systems to measure, monitor, and evaluate the awardees' performance; and (3) BEA's impact on banks' lending and investment in CDFIs and distressed communities. GAO noted that: (1) for fiscal year 1996, the Fund complied with the CDFI Act's requirements for negotiating performance goals and measures based on the awardees' business plans; (2) moreover, these goals and measures are consistent with the CDFI program's mission of promoting economic revitalization and community development; (3) however, because the CDFI Act provides no specific guidance for evaluating performance measures, GAO applied the Results Act's standards to the goals and measures that the Fund negotiated in assistance agreements with the 1996 awardees; (4) GAO found that the Fund could improve the nature, completeness, and specificity of these goals and measures; (5) GAO's evaluation of the awardees' assistance agreements revealed an emphasis on measures of activity, rather than on measures of accomplishment; (6) as a result, the assistance agreements focus primarily on what the awardees will do, rather than on how their activities will affect the distressed communities; (7) GAO's evaluation also revealed occasional omissions of measures for key aspects of goals and widespread omissions of baseline data and information on target markets; (8) primarily because of staffing limits, the Fund has just begun to develop mandated monitoring and evaluation systems; (9) given that most awardees have just recently signed their assistance agreements, it is still too early to assess their progress; (10) the impact of the BEA program on banks' lending and investment in CDFIs and distressed communities is difficult to isolate from the impact of regulatory and other economic incentives; (11) moreover, because the Fund did not require banks to report material changes in rewarded investments, it did not have a systematic way of learning about any such changes; (12) the Fund is not authorized to address what banks do with their award funds; however, most awardees have reported that they have reinvested at least a portion of their awards in community development activities; (13) the Fund's current strategic plan contains all of the elements required by the Results Act and suggested by the Office of Management and Budget's implementing guidance; (14) however, these elements generally lack the clarity, specificity, and linkage with one another that the act envisioned; (15) in addition, the plan does not describe the relationship of its activities to similar ones in other government agencies, and it does not indicate whether or how the Fund coordinated with other agencies in developing the plan; and (16) these difficulties are similar to those experienced by other federal agencies in implementing the Results Act's requirements.
USDOT established a minority and women’s business enterprise program for its highway, airport, and transit programs by regulation in 1980. The Surface Transportation Assistance Act of 1982 contained the first statutory DBE provision for federal highway and transit programs, requiring that a minimum of 10 percent of the funds provided by the act be expended with small businesses owned and controlled by socially and economically disadvantaged individuals, unless the Secretary of Transportation determined otherwise. Nonminority women were not included as socially and economically disadvantaged individuals. The Surface Transportation and Uniform Relocation Assistance Act of 1987 continued the program and included nonminority women in the statutory definition of socially and economically disadvantaged individuals, thereby allowing states to use contracts with both minority- and women-owned businesses to meet their DBE goals. The Intermodal Surface Transportation Efficiency Act of 1991 and TEA-21 (1998) reauthorized the program, continuing the combined 10-percent provision for participation by minority-owned and nonminority-women-owned DBEs. The percentage of federal funds expended through USDOT-assisted highway and transit contracts with DBEs increased from 9.9 percent in 1983 to 12.8 percent in 1999. TEA-21 and USDOT’s regulations establish the basic eligibility requirements for participation in the DBE program. The program is limited to small businesses owned and controlled by socially and economically disadvantaged individuals. Women and members of certain minority populations, such as African-, Hispanic-, and Native-Americans and other minorities found to be disadvantaged by the Small Business Administration (SBA), are presumed to be socially and economically disadvantaged unless proved otherwise. These individuals must own at least 51 percent of the firm and actually control its operations. To qualify as a small business, a firm must have average annual gross receipts over a 3-year period that do not exceed either (1) the applicable SBA small business size standards or (2) a USDOT-specific cap ($17.4 million). There is no legislative or administrative requirement limiting the length of time firms can participate in the program. However, DBEs become ineligible, or “graduate,” when their average annual gross receipts over a 3-year period exceed the applicable SBA small business size standards or the USDOT-specific cap. According to our survey results, most of the states and transit authorities did not have any DBEs graduate in 2000. In addition, about one-quarter of the states and transit authorities we surveyed could not provide this information. States and transit authorities are not required to track this information, and graduation is not a goal of the DBE program. Moreover, as we reported in 1994, because average annual gross receipts do not reliably indicate DBEs’ success, graduation is not a useful measure of the success of the program as a whole. USDOT administers the DBE program through the Office of the Secretary and the Department’s operating administrations, including the Federal Transit Administration (FTA) and the Federal Highway Administration (FHWA). USDOT develops program policies, instructions, and procedures; reviews and approves states’ and transit authorities’ DBE program plans; and provides technical assistance, among other things.
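The size-based portion of the eligibility and graduation test described above reduces to comparing a firm's average annual gross receipts over a 3-year period with two ceilings: the applicable SBA size standard and the USDOT-specific cap. The sketch below illustrates that comparison with hypothetical figures, including a hypothetical SBA size standard; it is not USDOT's or SBA's actual certification logic, and it omits the ownership, control, and disadvantage determinations that certification also requires.

    # Illustrative only: the size-based eligibility test sketched with hypothetical data.
    USDOT_CAP = 17_400_000  # USDOT-specific cap on average annual gross receipts

    def meets_size_standard(gross_receipts_3yr, sba_size_standard):
        """True if the firm's 3-year average annual gross receipts stay within both
        the applicable SBA size standard and the USDOT-specific cap; a firm whose
        average exceeds either limit "graduates" and becomes ineligible."""
        average = sum(gross_receipts_3yr) / len(gross_receipts_3yr)
        return average <= min(sba_size_standard, USDOT_CAP)

    # Hypothetical firm with receipts of $4.2, $5.0, and $6.1 million over three years,
    # measured against a hypothetical $27.5 million SBA size standard.
    print(meets_size_standard([4_200_000, 5_000_000, 6_100_000], 27_500_000))  # True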
States and transit authorities must certify that program applicants meet the eligibility criteria, reassess annually the eligibility of certified businesses, and establish overall annual goals for the participation of DBEs in their USDOT-assisted contracts. DBE participation goals are expressed as a percentage of all federal highway and transit funds expended on USDOT-assisted contracts in a fiscal year. One of the sources states and transit authorities may use to help set their overall federal DBE participation goals is data derived from disparity studies, which measure the availability of minority- and women-owned businesses compared with their utilization in contracting. States and transit authorities also use disparity studies to support state and local minority business contracting programs. The significance of disparity studies as evidence of discrimination in this context was discussed in a 1989 Supreme Court decision. In City of Richmond v. J.A. Croson Co., the Court held that state and local programs that use race or ethnicity as a factor in apportioning public contracting opportunities are subject to strict scrutiny. This means that the programs must serve a compelling governmental interest and be narrowly tailored—that is, designed to be no broader than necessary—to meet that interest. The Court found that combating racial discrimination is a compelling interest. However, it held that the city had not presented sufficient evidence of discrimination to justify its minority contracting plan. In evaluating the city’s evidence, the Court found, among other things, that the city had inappropriately relied on the disparity between the number of prime contracts awarded to minority firms and the minority population of Richmond. It stated that an appropriate disparity evaluation would compare the percentage of qualified minority contractors with the percentage of dollars actually awarded to minority businesses. While courts have favorably cited disparity studies in some cases, many courts have rejected the studies’ findings, often because of methodological weaknesses, when considering whether a compelling interest exists for state or local minority contracting programs. These decisions provide varying degrees of guidance on the data and methodology that need to be used in disparity studies to produce reliable evidence of discrimination. A 1995 Supreme Court decision had a significant impact on the federal DBE program, as well as other federal programs that use race or ethnicity as factors in decision-making. Adarand Constructors, Inc. v. Pena involved FHWA’s use of a subcontracting compensation clause in direct federal contracting to implement the DBE provision and provisions of the Small Business Act. Adarand Constructors, a nondisadvantaged contractor, initiated the litigation in 1990 after it was denied a subcontract on a federal lands highway project. In 1992, the district court held that the programs at issue were constitutional, and in 1994 the Tenth Circuit Court of Appeals affirmed that decision. In 1995, the Supreme Court set aside the Court of Appeals’ decision and sent the case back to the lower courts, directing them to apply the strict scrutiny standard and thus determine whether the programs were narrowly tailored to further a compelling governmental interest. Applying this standard, the district court held that the subcontracting compensation clause and related statutory provisions were unconstitutional in 1997. 
This decision was the subject of considerable discussion during the congressional debate over the reauthorization of the DBE program as part of TEA-21, which was enacted in 1998. Responding largely to the Supreme Court’s 1995 decision and congressional debate over the DBE program, USDOT issued regulations in 1999 to ensure that the DBE program is narrowly tailored. In addition, the regulations are designed to ensure nondiscrimination in the award and administration of USDOT-assisted contracts, remove barriers to the participation of DBEs in such contracts, and provide appropriate flexibility to the recipients of federal funds in establishing and providing opportunities to DBEs, among other things. In September 2000, the Tenth Circuit Court of Appeals upheld the constitutionality of the current DBE program because it found that the program served a compelling governmental interest and was narrowly tailored, largely because of structural changes in the program resulting from USDOT’s new regulations. In November 2000, Adarand Constructors requested that the Supreme Court review the Court of Appeals’ decision. In March 2001, the Supreme Court agreed to hear the case. USDOT’s 1999 DBE regulations made significant changes to the DBE program. For example, the new regulations overhauled the program’s goal-setting process, including the use of race-neutral measures (e.g., technical assistance), and revised its eligibility requirements. In addition, the new regulations required that states and transit authorities develop bidders lists and unified certification programs, among other things, to make the DBE program more streamlined and efficient. However, 72 percent of the states and transit authorities responding to our survey indicated that the new regulations have made it more difficult for them to administer the program. In addition, over half of the states and transit authorities indicated that the new regulations have made it more difficult for DBEs to apply to the program. The new goal-setting process shifted the focus of the program from achieving the maximum feasible extent of DBE participation in USDOT-assisted contracting to achieving a “level playing field”—that is, the amount of participation DBEs would be expected to achieve in the absence of discrimination. For example, under the prior regulations, states and transit authorities were required to justify goals lower than 10 percent—the amount identified in the statutory DBE provision. The prior regulations thus established a direct link between the amount of participation identified in the statute and the goals set by states and transit authorities. In contrast, the new regulations require states and transit authorities to base their DBE participation goals on demonstrable evidence of the number of “ready, willing, and able” DBEs available in local markets relative to the number of all businesses “ready, willing, and able” to participate in USDOT-assisted contracts in such markets—representing the level of DBE participation expected in the absence of discrimination. The new regulations outline a two-step process for goal-setting. First, states and transit authorities must establish a base figure that represents the “ready, willing, and able” DBEs in the state or transit authority’s market relative to all “ready, willing, and able” firms in that market (i.e., relative availability of DBEs).
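A minimal sketch of this two-step calculation follows. The base figure is the share of all “ready, willing, and able” firms in the market that are DBEs; the second, adjustment step described below is represented here as a simple additive change. The counts and the adjustment are hypothetical, and the regulations do not prescribe any particular form for the adjustment.

    # Illustrative only: the two-step goal-setting calculation with hypothetical counts.

    def base_figure(ready_willing_able_dbes, all_ready_willing_able_firms):
        """Step one: relative availability of DBEs in the recipient's market."""
        return ready_willing_able_dbes / all_ready_willing_able_firms

    def adjusted_goal(base, adjustment):
        """Step two: adjust the base figure for other factors (e.g., DBE capacity,
        disparity study findings); here the adjustment is simply additive."""
        return base + adjustment

    base = base_figure(ready_willing_able_dbes=110, all_ready_willing_able_firms=1_000)
    goal = adjusted_goal(base, adjustment=0.02)
    print(f"Base figure: {base:.1%}; adjusted overall goal: {goal:.1%}")
    # Base figure: 11.0%; adjusted overall goal: 13.0%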
To determine the relative availability of DBEs, the new regulations require that states and transit authorities use the best available data and suggest that states and transit authorities use DBE directories and Census Bureau data, bidders lists, disparity studies, or the goal of another recipient. Second, states and transit authorities must adjust their base figure to account for other factors affecting DBEs, such as the capacity of DBEs to perform work in USDOT-assisted contracts and findings from disparity studies. According to our survey results, the most common sources used to set states’ and transit authorities’ fiscal year 2000 participation goals were DBE directories, historical utilization patterns, Census Bureau data, and bidders lists. Under the new goal-setting process, the average DBE participation goal decreased from 14.6 percent in fiscal year 1999 to 13.5 percent in fiscal year 2000. The new regulations also require that states and transit authorities meet the maximum feasible portion of their overall DBE goals using race-neutral measures rather than race-conscious measures. The prior regulations did not require the use of race-neutral measures (e.g., outreach and technical assistance), which are designed to increase contracting opportunities for all small businesses, and do not involve setting specific goals for the use of DBEs on individual contracts. A race-conscious measure is one that is focused solely on assisting DBEs. An example of a race-conscious measure is a contract goal—that is, a DBE participation goal set for a specific contract or project. While quotas are prohibited and set-asides are allowed only in the most extreme cases of discrimination, states and transit authorities must use contract goals to meet any portion of their overall goals they do not expect to meet using race-neutral measures. States and transit authorities must submit their overall DBE participation goals, including the methodology used to set the goals and the projected use of race-neutral and race-conscious measures, to USDOT for approval on an annual basis. The states and transit authorities we surveyed indicated that, on average, they used race-neutral measures to achieve slightly over one-third of their overall DBE participation goals in fiscal year 2000. The new regulations established a personal net worth cap for individuals whose ownership and control of a business determines DBE eligibility. According to USDOT, this new eligibility requirement is designed to ensure that the program is limited to firms owned and controlled by genuinely disadvantaged individuals. Prior to the new regulations, the absence of a limit on personal net worth led to criticism that wealthy individuals could benefit from the program. Under the new regulations, to qualify as economically disadvantaged, individuals who own and control DBEs must have a personal net worth that does not exceed $750,000. USDOT chose the $750,000 cap because it is a well-established standard for the SBA’s programs. According to our survey results, the number of firms that exceeded this limit and became ineligible for the DBE program in fiscal year 2000 ranged from 0 in 14 states and transit authorities to 39 in 1 state. Twenty-two percent of the states and transit authorities we surveyed reported that this information was not available. The new regulations include other changes designed to improve the effectiveness and efficiency of the DBE program.
For example, states and transit authorities are now required to create and maintain a bidders list, which is a record of all firms that bid on prime and subcontracts for USDOT-assisted projects. The list must include each firm’s name, its status as a DBE or non-DBE, its years in operation, and its annual gross receipts. The list is intended to count all firms that are participating in, or attempting to participate in, USDOT-assisted contracts; however, the regulations do not specify how often the bidders list must be updated, for example, to ensure that firms no longer available are removed from the list. USDOT believes that the bidders list is a promising tool for states and transit authorities to accurately measure the relative availability of “ready, willing, and able” DBEs when setting their DBE goals. Sixty percent of the states and transit authorities we surveyed reported that they are in the process of developing or implementing their bidders lists, while 27 percent indicated that their lists are fully implemented. Eight percent of the states and transit authorities reported that they had not yet started developing their bidders lists. The remaining 5 percent of the states and transit authorities reported that their bidders lists were in some other stage of development. Another change designed to improve the efficiency of the DBE program is the requirement that states and transit authorities develop and participate in a unified certification program (UCP). A UCP provides “one-stop shopping” for DBEs because it makes all DBE certification decisions within a state. All recipients within a state must honor the certification decisions of the UCP. Prior to the new regulations, DBEs often had to obtain separate certifications from multiple recipients within one state. For example, in California there were about 60 certifying agencies throughout the state. Under the new regulations, DBEs will have to be certified by only one agency to participate in the DBE programs administered by all recipients in that state. By March 2002, the state DOT and all transit authorities within each state must sign an agreement establishing the UCP for that state and submit an implementation plan to USDOT for approval. The UCP must be fully operational no later than 18 months after USDOT approves the plan. The majority of the states and transit authorities (72 percent) we surveyed indicated that their UCPs were in some stage of development or implementation, while 7 percent indicated that their UCPs were fully implemented; 14 percent reported that they had not yet started to develop their UCPs; and the remaining 6 percent noted that their UCPs were in some other stage of development. Fifty-four percent of the states and transit authorities we surveyed indicated that the new regulations had made it somewhat or much more difficult for DBEs to apply to the DBE program. This view could be attributable to the requirement for additional documentation that DBEs must now submit—specifically, documentation of the personal net worth of the individuals who own and control the firms. Furthermore, when states and transit authorities were asked to identify barriers to firms’ participation in the DBE program, the two most common barriers cited were (1) reluctance to provide personal information and (2) the time required for certification paperwork. Despite these problems, most states and transit authorities (58 percent) indicated that they believe the benefits to firms participating in the DBE program outweigh any costs.
In addition, states and transit authorities reported that the new regulations made it more difficult to administer the program. For example, 59 of 82 states and transit authorities we surveyed reported that the new regulations had made it somewhat or much more difficult to administer the DBE program, while 9 states and transit authorities indicated that the new regulations had made it easier to administer the program. Fourteen states and transit authorities reported that there was no change or that they had no basis to judge. It is not surprising that most states and transit authorities reported that the new regulations made it more difficult to administer the DBE program, since the new regulations required that they completely overhaul their DBE goal-setting process and collect more information from DBEs and non-DBEs. One source of frustration for states and transit authorities appears to be the process for developing and approving the new DBE program plans. The new regulations required that states and transit authorities develop plans for fiscal year 2000 that reflected the requirements and changes under the new regulations and submit those plans to USDOT by August 31, 1999—about 6 months after the effective date of the new regulations. During the approval process, USDOT sometimes sent the DBE plans back to states and transit authorities multiple times for revisions and clarifications. One state noted that even after it had worked closely with USDOT’s local office to develop its plan, USDOT headquarters twice rejected the plan. On average, it took USDOT 8 months to approve the 2000 DBE plans. According to our survey results, one state and seven transit authorities reported that their 2000 DBE plans had yet to be approved. A lack of key information prevents anyone from gaining a clear understanding of firms that participate in the DBE program and how they compare with the rest of the transportation contracting community. For example, we cannot use the information provided by the states and transit authorities we surveyed to calculate the total number of certified DBEs nationwide because of duplication in the states’ and transit authorities’ DBE directories. In addition, almost two-thirds of our survey respondents could not provide information on the annual gross receipts of DBEs or the personal net worth of the individuals who own and control DBEs—information that is used to determine firms’ eligibility for the program but is not reported to USDOT and was not readily available. Furthermore, almost 95 percent of the states and transit authorities we surveyed could not provide information on the annual gross receipts of non-DBEs, and none could provide information on the personal net worth of the individuals who own and control non-DBEs. While financial information on DBEs and non-DBEs is lacking, most states and transit authorities could provide some other type of information on DBEs, such as the total number of prime contracts awarded to DBEs—information that is regularly reported to USDOT. These data indicate, among other things, that DBEs received about 7 percent of the prime contracts awarded and 2 percent of the federal dollars awarded for prime contracts in fiscal year 2000. We cannot calculate the total number of certified DBEs nationwide because of duplication in states’ and transit authorities’ DBE directories. States and transit authorities are required to maintain DBE directories that list all the DBEs they have certified. However, DBEs can be certified in multiple locations.
For example, a DBE may be certified by Virginia, Maryland, Pennsylvania, and the District of Columbia. Unlike the SBA’s Small and Disadvantaged Business program, which gives a unique identification number to each certified small and disadvantaged business, the DBE regulations do not require states and transit authorities to assign unique identifiers to certified DBEs. As a result, a DBE certified with four states would be listed in four different DBE directories. Because of this duplication, aggregating the number of certified DBEs listed in states’ and transit authorities’ DBE directories would significantly overstate the number of firms certified. While we cannot provide the total number of certified DBEs nationwide, our survey results indicate that the number of certified DBEs per state and transit authority varies greatly. For example, in fiscal year 2000, the number of certified DBEs per state or transit authority ranged from 39 in the state of Maine to 3,350 in the Metropolitan Atlanta Rapid Transit Authority, with an average of 551 per state and transit authority. Although FHWA could provide information on the demographics of DBEs that obtain highway contracts, FTA could not provide comparable data. As a result, the demographics of the entire DBE community are unknown. FHWA’s data on DBE participation indicate that nonminority-women-owned businesses obtain a significant portion of contracts. Prior to 1987, states and transit authorities could not generally count contracts with nonminority-owned businesses toward DBE goals. The Surface Transportation and Uniform Relocation Assistance Act of 1987 included nonminority-women-owned businesses in the statutory definition of socially and economically disadvantaged individuals and thus allowed states and transit authorities to use contracts with both minority- and nonminority-women-owned businesses to meet their DBE goals. According to FHWA’s data, nonminority-women-owned businesses have become one of the most competitive groups in the DBE community since 1987. For example, in 1999 (the latest year for which these data are available), nonminority-women-owned businesses accounted for about 48 percent of all federal highway contract dollars awarded to DBEs; minority-owned businesses (those owned by both men and women) combined accounted for about 52 percent. (See fig. 1.) FTA was unable to provide reliable data on the demographics of the DBEs that were awarded federal transit contracts, even though transit authorities must provide this information to FTA on a quarterly basis. According to FTA, it does not centrally compile this information. The majority of the states and transit authorities we surveyed (78 percent) provided sufficient data—that is, the number and value of prime contracts awarded to DBEs and non-DBEs—to determine DBEs’ participation rates in prime contracts. According to the data we obtained from these states and transit authorities, DBEs received about 7 percent of the prime contracts awarded and 2 percent of the federal dollars awarded for prime contracts in fiscal year 2000. In comparison, about 70 percent of the states and transit authorities could not provide both the number and value of subcontracts awarded to DBEs and non-DBEs—information necessary to calculate DBEs’ participation rates in subcontracts.
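A participation rate of the kind discussed above can be computed only when both the number and the dollar value of awards are available for DBEs and non-DBEs. The sketch below shows that calculation with hypothetical award records; the figures are illustrative and are not drawn from the survey data.

    # Illustrative only: DBE participation rates by contract count and by dollars,
    # computed from hypothetical award records of the form (is_dbe, dollar_value).

    def participation_rates(awards):
        """Return DBEs' share of contract count and of contract dollars."""
        total_count = len(awards)
        total_dollars = sum(value for _, value in awards)
        dbe_count = sum(1 for is_dbe, _ in awards if is_dbe)
        dbe_dollars = sum(value for is_dbe, value in awards if is_dbe)
        return dbe_count / total_count, dbe_dollars / total_dollars

    # Hypothetical prime-contract awards.
    awards = [(True, 200_000), (False, 4_000_000), (False, 3_500_000), (False, 2_300_000)]
    by_count, by_dollars = participation_rates(awards)
    print(f"DBE share of contracts: {by_count:.0%}; share of dollars: {by_dollars:.0%}")
    # DBE share of contracts: 25%; share of dollars: 2%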
Because DBEs are small businesses and are more likely to compete for subcontracts, which generally require fewer resources (e.g., capital, equipment, and employees) than prime contracts, the lack of subcontracting data prevents anyone from gaining a complete understanding of DBEs’ participation in transportation contracting. The data provided from about one-third of the states and transit authorities indicate that DBEs received about 33 percent of all subcontracts awarded and 26 percent of the federal dollars awarded through subcontracts in fiscal year 2000. However, because this information is based on a small number of states and transit authorities, it may not be representative and therefore should not be generalized to the entire DBE community. The participation rates of DBEs in both prime and subcontracts in fiscal year 2000 indicate that they received a relatively small percentage of federal prime and subcontracts and dollars when compared with non-DBEs. However, we do not know whether the percentage is disproportionately low. Such a determination cannot be made without an accurate measure of the availability of DBEs—that is, the number of DBEs “ready, willing, and able” to participate in prime and subcontracts compared with the number of non-DBEs. The majority of states and transit authorities responding to our survey could not provide information on the annual gross receipts of DBEs. Specifically, 60 percent of these states and transit authorities could not provide information on the annual gross receipts of the DBEs that were awarded prime or subcontracts in fiscal year 2000. Furthermore, 75 percent of the states and transit authorities could not provide information on the annual gross receipts of the DBEs that were not awarded prime or subcontracts in fiscal year 2000. While the annual gross receipts of a DBE are required to determine the firm’s eligibility for the program, this information is not reported to USDOT. The primary reason survey respondents cited for not being able to provide the information was that the information is not in an electronic database and therefore would be difficult and time-consuming to compile. The information that was provided from a limited number of states and transit authorities indicates that most DBEs’ annual gross receipts are below $5 million—well below the current USDOT-specific cap of $17.4 million. Furthermore, 85 percent of the DBEs awarded contracts in fiscal year 2000 had annual gross receipts of less than $5 million. In comparison, 94 percent of the DBEs that did not receive a contract in fiscal year 2000 had annual gross receipts of less than $5 million. However, because this information is based on only a small percentage of the states and transit authorities we surveyed, it may not be representative and therefore should not be generalized to the entire DBE community. (For more detailed information see app. II.) The majority of the states and transit authorities we surveyed could not provide information on the personal net worth of the individuals who own and control DBEs. Specifically, about 65 percent of the states and transit authorities indicated that they could not provide information on the personal net worth of the owners of DBEs that were awarded prime contracts in fiscal year 2000. Sixty-seven percent of the states and transit authorities reported that they could not provide information on the personal net worth of the owners of DBEs that were awarded subcontracts in fiscal year 2000.
In addition, 81 percent of the states and transit authorities indicated that they could not provide information on the personal net worth of the owners of DBEs that were not awarded prime contracts in fiscal year 2000. Seventy-eight percent of the states and transit authorities indicated that they could not provide information on the personal net worth of the owners of DBEs that were not awarded subcontracts in fiscal year 2000. Similar to the information on a firm’s annual gross receipts, personal net worth information is required to determine a firm’s eligibility for the program but is not reported to USDOT. Since this eligibility requirement was introduced in the new regulations, states and transit authorities are just starting to collect this information. Over 60 percent of the states and transit authorities indicated that they could not provide this information because it is not electronically maintained and therefore would be difficult and time-consuming to compile and report. The information that was provided by a limited number of states and transit authorities indicates that over half of the DBEs that received prime and subcontracts in fiscal year 2000 had owners whose personal net worth was less than $250,000. Additionally, the data indicate that the personal net worth of the owners of DBEs receiving prime contracts was higher than the personal net worth of the owners of DBEs receiving subcontracts. However, because this information is based on the responses of a small percentage of all states and transit authorities, it may not be representative and therefore should not be generalized to the entire DBE community. (For more detailed information see app. II.) Currently, the financial status of DBEs cannot be compared with that of the transportation contracting community as a whole because most states and transit authorities do not collect or maintain financial information on non-DBEs. For instance, over 90 percent of the states and transit authorities responding to our survey could not provide information on the annual gross receipts of non-DBEs. The primary reason for not being able to report the information was not having it in an electronic database. The new regulations require states and transit authorities to collect information on the annual gross receipts of the non-DBEs that bid on their USDOT-assisted contracts. This information is to be included in the states’ and transit authorities’ bidders lists. According to USDOT, states and transit authorities have expressed concern about their ability to collect this information because non-DBEs have been reluctant to share it. No survey respondent could provide information on the personal net worth of the owners of non-DBEs that were awarded prime or subcontracts in fiscal year 2000. The majority of the states and transit authorities (61 percent) indicated that they do not currently collect this information and do not plan to do so in the future; only 8 percent reported that they plan to collect this information in the future. States and transit authorities are not required to collect information on the personal net worth of the owners of non-DBEs. There are numerous sources that could contain information relevant to whether discrimination limits the ability of DBEs to compete for USDOT-assisted contracts, including studies of lending, bonding, and business practices affecting the formation and competition of minority firms; state and local disparity studies; discrimination complaints; and relevant court cases.
We focused our review on court cases involving the federal DBE program since the Supreme Court’s 1995 decision in Adarand Constructors, Inc. v. Pena; transportation-specific disparity studies published between 1996 and 2000; and written complaints of discrimination filed by DBEs with states, transit authorities, and USDOT. We focused on these sources because they are directly related to transportation contracting and the federal DBE program. However, we did not address whether the DBE program satisfies the requirements of strict scrutiny and is therefore constitutional. In our review, we found the following: The courts that have considered the constitutionality of the federal DBE program under the standard articulated in the Supreme Court’s 1995 decision in the Adarand case have concluded that discrimination adversely affects DBEs. All 14 studies we reviewed found that there were disparities between the availability and utilization of minority- and women-owned business enterprises (MBE/WBEs) in transportation contracts. Taken as a whole, these studies suggest that disparities exist. However, none provide reliable evidence of disparity because the limited data used to calculate disparities, compounded by methodological weaknesses, create significant uncertainties about the studies’ findings—that is, they could result in either an overstatement or an understatement of MBE/WBEs’ availability and utilization. USDOT does not systematically track information on the discrimination complaints filed by DBEs—information that could shed light on the existence of discrimination against DBEs. In addition, a number of factors are often cited by agency officials and representatives from both industry and minority associations as limiting DBEs’ ability to compete for contracts. These factors include a lack of working capital and limited access to bonding. However, there was little agreement among the officials we contacted about whether these factors are attributable to discrimination or are barriers that all small businesses face. In order to uphold a program, such as the federal DBE program, that uses race or ethnicity as a criterion for decision-making, a court must find sufficient evidence of discrimination to conclude that the program serves a compelling governmental interest. Therefore, cases considering the constitutionality of the federal DBE program can indicate whether discrimination adversely affects DBEs’ participation in transportation contracting. The courts that have addressed the DBE program under the standard articulated by the Supreme Court’s 1995 decision in Adarand Constructors, Inc. v. Pena (discussed on page 13 of this report) have found that the evidence of discrimination presented was sufficient for them to conclude that the program serves a compelling governmental interest, specifically, remedying the effects of discrimination against DBEs. Most recently, in its review of the DBE program in Adarand, the Tenth Circuit Court of Appeals concluded that discrimination adversely affects both the formation of qualified minority subcontracting businesses and their ability to successfully compete for highway construction subcontracts. On the basis of the evidence presented, the court found that discrimination by prime contractors, unions, and lenders impedes the formation of qualified minority businesses in the subcontracting market nationwide.
The court also acknowledged the causal link between the availability of capital and the ability to implement public works construction projects and found that the studies cited by the government strongly supported a finding of discrimination in lending. For example, it cited a survey of 407 business owners in the Denver area that found significant differences in the loan denial rate for white, African-American, and Hispanic business owners, even after controlling for other factors like size and net worth. The court also addressed barriers to competition by existing minority businesses. Citing congressional hearings and statistical evidence, among other things, the court found that minority businesses are often excluded by business networks of prime and subcontractors from opportunities to bid on construction projects. The court also discussed bonding requirements, finding that they posed another barrier to competition. For example, it cited a Louisiana study finding that minority firms were nearly twice as likely as white firms with the same experience to be rejected for bonding; three times more likely to be rejected for bonding in amounts over $1 million; and, on average, charged higher rates for the same bonding policies. Similarly, the court accepted evidence of suppliers’ withholding price discounts from minority subcontractors, thus driving up their bids. In light of this evidence, the court rejected Adarand Constructors’ argument that minority businesses face the same problems as all new businesses, regardless of the race of the owners. Finally, the court considered disparity studies conducted by state and local governments. In doing so, the court accepted the government’s finding, based on a review of disparity studies, that minority construction subcontracting firms received 87 cents for every dollar that they would be expected to receive given their availability. The court also acknowledged the potential for weaknesses in the data and methodology used in disparity studies and stated that particular evidence undermining the reliability of specific studies would be relevant to a determination regarding discrimination. However, it noted that Adarand Constructors had not provided it with evidence undermining the studies’ reliability. Furthermore, the court found that Adarand Constructors had failed to introduce credible, specific evidence to refute the government’s showing of a compelling interest. As a result, it held that there was sufficient evidence of discrimination to justify the use of racial and ethnic criteria in transportation contracting. Fourteen recent, transportation-specific disparity studies concluded that disparities existed between the utilization of MBE/WBEs in transportation contracts and the availability of these firms in the marketplace. Numerous state and local governments have used disparity studies to support their minority contracting programs and to set their federal DBE goals. For example, about 30 percent of the states and transit authorities we surveyed reported that they used a disparity study to help set their fiscal year 2000 DBE participation goals. However, our review of the 14 disparity studies found that the limited data used to calculate disparities, compounded by methodological weaknesses, create uncertainties about the studies’ findings. Rather than discuss the limitations of each study specifically, we have chosen to discuss some of the more common problems we found.
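For context on the discussion that follows, the core calculation in these studies—and the basis of the 87-cent figure cited above—can be expressed as a disparity ratio. One common formulation, shown below, is illustrative rather than a reproduction of any particular study’s method; the studies’ exact specifications vary.

```latex
% One common formulation of a disparity ratio; specifications vary across studies.
\[
D \;=\; \frac{\text{utilization share}}{\text{availability share}}
  \;=\; \frac{\text{MBE/WBE share of contract dollars (or contracts) awarded}}
             {\text{MBE/WBE share of available firms}}
\]
% D = 1 indicates utilization proportional to availability; D = 0.87 corresponds
% to receiving 87 cents for every dollar expected given availability.
```

Because both the numerator and the denominator depend on the availability and utilization lists discussed below, errors in either list flow directly into the ratio.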
While not all studies suffered from every problem, each suffered from enough problems to make its findings questionable. We recognize that there are difficulties inherent in conducting disparity studies and that such limitations are common to social science research; however, the disparity studies we reviewed did not sufficiently address such problems or disclose their limitations. It is not clear what conclusions a court would draw about the studies’ findings. The studies we reviewed relied on a disparity ratio—that is, a comparison of the availability of MBE/WBEs to their utilization in contracts—as an indicator of discrimination. However, the data necessary to properly calculate such ratios—complete and accurate lists of MBE/WBEs’ availability and utilization—are often lacking. An availability list should include all qualified, willing, and able firms in the relevant market area, grouped by industry subspecialties and by MBE/WBE or non-MBE/WBE status. A utilization list should include all firms in the relevant market area that were awarded prime and subcontracts, grouped by industry subspecialty and MBE/WBE or non-MBE/WBE status. Because these data are often lacking, some proxies (i.e., substitute information) have been used to calculate disparity ratios. To develop proxies of availability, the disparity studies we reviewed used sources including Census Bureau data, directories or other listings of firms, prequalification lists, and/or bidders lists. These could be useful data sources. However, all of these data sources have shortcomings, whether used separately or in combination, that must be taken into account when using them as proxies for availability. Such shortcomings would result in availability lists that could either under- or overstate the number of firms available for transportation contracting. The limitations of using these data sources as proxies for availability include the following: Census Bureau data cannot adequately indicate whether a firm is truly available, that is, whether it has the qualifications, willingness, or ability to complete contracts. However, in using Census Bureau data, the studies treated all operational firms as available for contracting. Some studies attempted to account for the qualifications of firms by including only firms in the relevant two-digit Standard Industrial Classification (SIC) codes in their availability lists. Using a finer degree of distinction (e.g., classification by the four-digit SIC code level) would help to ensure that firms are similar enough for comparison. For example, some studies used the two-digit SIC code for heavy construction, a category that includes firms as diverse as general contractors for highway construction and general contractors for radio tower construction. Directories and other listings do not contain information on firms’ qualifications, willingness, or abilities. This could result in an overstatement of how many firms are available for transportation contracting. In addition, some of the data obtained from directories and listings were inaccurate. For example, some of the disparity studies we reviewed indicated that as many as 16 percent of the firms included in the directories and listings were unreachable because of such problems as disconnected telephones, wrong telephone numbers, incorrect addresses, or dissolution of the firms.
Prequalification and bidders lists may be better sources of availability than Census Bureau data or directories because they better approximate firms’ qualifications, willingness, and ability to compete for contracts. However, the mechanisms used by states and transit authorities to compile them may limit their reliability. In the studies we reviewed, we found four problems. First, some studies we reviewed used bidders and prequalification lists that were updated infrequently or had no mechanism to ensure that firms no longer available were removed from the list. For example, one study used a list that never removed firms, increasing the risk that it contained firms no longer in business in the relevant market area. Second, some studies we reviewed used bidders or prequalification lists that were maintained for multiple city agencies, ranging from school districts to port authorities. Businesses qualified to perform school district work may not be qualified to perform port authority work. Third, the lists grouped all potential firms together, failing to take into account their industry subspecialty and capacity. Because of these problems, availability lists based on this information would overstate the number of firms that were qualified, willing, and able to perform transportation contracts. Finally, prequalification and bidders lists could underrepresent capable firms. Firms may refrain from participating because of perceived or actual barriers. For example, one study we reviewed surveyed firms and found that only 22 percent of those firms that expressed an interest in contracting with the transit agency had actually attempted to obtain such work in the past. The disparity studies we reviewed made few efforts to mitigate the problems with using these data sources as proxies for availability and did not disclose the limitations of their use. For example, the disparity studies did not sufficiently account for the lack of information on firms’ qualifications when the availability lists were developed. One aspect of a firm’s qualifications is its capability to handle transportation contracting. Some studies used average yearly revenue as a proxy for capability. However, revenue does not adequately capture differences in firms’ capability. For example, two firms could have similar yearly revenues, but one firm might have performed 100 small contracts throughout the year because it did not have the capacity to perform large contracts, whereas the second firm might have performed two very large contracts. If revenue were used as a proxy for capability, these two firms would be considered equivalent. In addition to determining the availability of firms, disparity studies must measure the utilization of MBE/WBEs to determine if disparities in contracting exist. This requires an analysis of both the number and dollar amount of contracts awarded to MBE/WBEs and non-MBE/WBEs. Such measurement is difficult because some states and transit authorities have incomplete records of the prime contracts and subcontracts they have awarded. For example, several studies we reviewed did not include any analyses of subcontracting and therefore may understate the utilization of firms. Because MBE/WBEs are more likely to be awarded subcontracts than prime contracts, MBE/WBEs in particular may appear underutilized when the focus remains on prime contractor data.
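To illustrate why these measurement choices matter, the sketch below computes MBE/WBE utilization two ways—by number of awards and by dollars—first from prime contracts alone and then with subcontracts included. The award records are invented for illustration and are not drawn from any study we reviewed.

```python
# Illustrative only: hypothetical awards as (contract type, MBE/WBE flag, dollar amount).
awards = [
    ("prime", False, 4_000_000), ("prime", False, 6_500_000),
    ("prime", True,    300_000), ("sub",   True,    250_000),
    ("sub",   True,    400_000), ("sub",   False,   900_000),
]

def utilization(records):
    """Return the MBE/WBE share of award count and of award dollars."""
    total_count = len(records)
    total_dollars = sum(amount for _, _, amount in records)
    mwbe_count = sum(1 for _, is_mwbe, _ in records if is_mwbe)
    mwbe_dollars = sum(amount for _, is_mwbe, amount in records if is_mwbe)
    return mwbe_count / total_count, mwbe_dollars / total_dollars

primes_only = [r for r in awards if r[0] == "prime"]
print("Prime contracts only (count share, dollar share):", utilization(primes_only))
print("Primes and subcontracts (count share, dollar share):", utilization(awards))
```

In this invented example, measuring utilization by prime contract dollars alone yields a share several times smaller than a measure that counts awards and includes subcontracts—the pattern described above.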
Furthermore, although some studies did include calculations based on the number of contracts, all but two based their determination of disparities on only the dollar amounts of contracts. Because MBE/WBEs tend to be smaller than non-MBE/WBEs, they are often unable to perform larger contracts. Therefore, they may appear to have been awarded a disproportionately small amount of contract dollars. A more complete indicator of utilization would consider both the dollar amount and the number of contracts awarded, or would control for differences in contract dollar amounts. In March 2001, USDOT advised states and transit authorities that disparity studies used to set their DBE participation goals should be reliable. While pointing out that all or part of a disparity study pertaining to a local market area could provide a rich source of information for the goal-setting process, USDOT did not explain how states and transit authorities could evaluate the reliability of such studies. USDOT’s guidance does not, for instance, caution against using studies that contain the types of data and methodological problems that we identified above. Without explicit guidance on what makes a disparity study reliable, states and transit authorities risk using studies that may not provide accurate information in setting their DBE goals. USDOT receives written complaints of discrimination from DBEs but does not systematically track or analyze information on these complaints. As a result, this information is not readily available to shed light on the absence or presence of discrimination against DBEs. USDOT could not provide the total number of written complaints filed by DBEs for two reasons. First, while USDOT’s Office of Civil Rights (DOCR) records the complaints and assigns identification numbers before routing them to FTA or FHWA for investigation, DOCR’s records may not include complaints filed directly with those agencies. Second, DBEs may file complaints of discrimination under the DBE regulations or regulations issued under title VI of the Civil Rights Act of 1964; however, DOCR does not record which title VI complaints are filed by DBEs. Similarly, FTA does not separately track the title VI complaints filed by DBEs. Because of these two problems, information provided by USDOT would likely understate the number of complaints of discrimination filed by DBEs. In addition, USDOT could not provide the total number of investigations launched as a result of the written discrimination complaints filed by DBEs or information on the outcomes of these investigations. The number of investigations launched and the outcomes of those investigations are critical pieces of information for determining whether the discrimination complaints filed by DBEs have merit. USDOT officials stated they do not track the number of investigations of written discrimination complaints filed by DBEs or the number of times discrimination was found through their investigations. To gather this type of information, USDOT officials stated one would need to go through each case file individually—nearly 100 over the last several years, not including title VI complaints. We also asked the states and transit authorities we surveyed about written discrimination complaints filed by DBEs in fiscal years 1999 and 2000. Eighty-one percent of the respondents reported that they had not received any written discrimination complaints filed by DBEs during this period.
Nineteen percent of the states and transit authorities reported that they had received a total of 31 written discrimination complaints filed by DBEs in 1999 and 2000. Of the 31 complaints filed, 29 had been investigated. Four of these investigations resulted in findings of discrimination. While the number of complaints filed by DBEs with states and transit authorities may seem low, it is important to note that DBEs that believe they have been the victims of discrimination have several options and may have elected to pursue action elsewhere. For example, a DBE could file a complaint with the responsible state or transit authority, USDOT, and/or the courts. In addition, USDOT officials stated that the number of written discrimination complaints filed (at any level) probably understates the level of discrimination for two reasons: DBEs may choose not to file complaints because they believe the process is too time-consuming or burdensome, or because they fear retribution (i.e., they would be denied future work). Other factors may also limit the ability of DBEs to compete for USDOT-assisted contracts. However, there was little agreement among the officials we spoke with as to whether these factors are due to discrimination or to the nature of small businesses. According to our survey results, 80 percent of the states and transit authorities responding had not conducted any type of analysis on this subject. In addition, neither USDOT, SBA, nor the industry groups we contacted had conducted any type of study on factors that may limit the ability of DBEs to obtain contracts. The industry officials we spoke with often cited such factors as contract bundling; limited access to bonding, working capital, and credit; and prequalification requirements. The most common factors cited as limiting DBEs’ ability to compete for contracts are a lack of working capital and limited access to credit and bonding. For example, according to an association representing small minority-owned businesses, DBEs frequently lack the capital needed to finance jobs without drawing on credit and are denied credit because they lack sufficient cash flow. Since these factors are widely perceived as limiting the ability of DBEs to compete for contracts, USDOT has established a number of services, including short-term lending and bonding assistance, to help overcome these barriers. Another factor often cited as a barrier to DBEs’ ability to compete for contracts is contract bundling. Contract bundling is the consolidation of two or more procurement requirements previously provided or performed under separate, smaller contracts into a solicitation of offers for a single contract. The resulting contract is likely to be unsuitable for award to small businesses, such as DBEs, because of (1) the diversity, size, or specialized nature of the elements of the performance specified; (2) the aggregate dollar value of the anticipated award; (3) the geographic dispersion of contract performance sites; or (4) any combination of these three factors. USDOT officials stated that they believe contract bundling is one of the largest barriers for DBEs in competing for transportation contracts. GAO recently reported that there are limited government-wide data on the extent of contract bundling and its effect on small businesses. Prequalification requirements are also cited as a barrier for DBEs.
Most states require that firms competing for prime contracts be prequalified, meaning they must prove to the state that they are capable of performing contracts. For example, firms must show that they have an adequate line of credit and are bonded. According to USDOT officials, these requirements can hurt DBEs because the firms may not have the working capital and access to credit required for prequalification. Several measures could be used to help determine the impact of the DBE program. TEA-21 directed us to analyze the impact of the DBE program on costs, including the costs of administering the program; the impact of the DBE program on competition and the creation of jobs; and the impact of discontinuing federal or nonfederal DBE programs on DBEs. USDOT, states, and transit authorities incur costs in implementing and administering the DBE program. USDOT estimates that it incurred about $6 million in costs (including salaries and training expenses) to administer the DBE program for highways and transit authorities in fiscal year 2000. Sixty-nine percent of the states and transit authorities responding to our survey estimated that they incurred a total of about $44 million in costs, including certification costs, to administer the DBE program in fiscal year 2000. The costs incurred ranged from a high of $4.5 million to a low of about $10,000. In addition, 13 states and transit authorities incurred a total of about $250,000 in litigation costs in fiscal year 2000 that they attributed to the federal DBE program. Although it has been asserted that the DBE program increases the costs of contracting (referred to as additional construction costs), 99 percent of the states and transit authorities we surveyed had not conducted a study or analysis to determine whether the DBE program has an impact on their contract costs. USDOT has also not conducted such an analysis. Almost none of the states and transit authorities responding to our survey have analyzed the impact of the DBE program on competition and the creation of jobs. Nor has USDOT conducted this type of analysis. According to USDOT officials and representatives from transportation associations, the DBE program does not create jobs; rather, it shifts jobs to individuals who might not receive the jobs otherwise. As USDOT officials noted, USDOT-assisted contracts will be let regardless of the DBE program, and the program encourages greater racial and gender diversity within transportation contracting. However, there is less agreement about the effects of the program on competition. Officials from USDOT and a minority business group stated that the DBE program does not hurt competition, noting that the DBE program does not use quotas and that DBEs must compete with non-DBEs to receive USDOT-assisted contracts. Moreover, these officials commented that the DBE program enhances competition because it encourages greater participation by more firms. In contrast, representatives from transportation associations believe that the DBE program stifles competition in certain subcontracting areas (e.g., guardrail work) where there is an overconcentration of DBEs. Because of this overconcentration of DBEs, according to the transportation associations, non-DBEs do not have an opportunity to work in those fields.
Although USDOT does not have data indicating that overconcentration is a serious, nationwide problem, the new regulations authorized states and transit authorities to remedy situations in which an overconcentration of DBEs is limiting non-DBEs’ ability to compete for contracts, such as by varying the use of contract goals in these areas. Limited data prevent a thorough assessment of the impact of suspending or repealing (discontinuing) federal or nonfederal DBE programs on DBEs’ participation in transportation contracting. As evidence that the DBE program is needed, supporters often cite statistics on DBEs’ participation in transportation contracting after minority- and women-owned business contracting programs are discontinued. An example used during the congressional debate preceding the passage of TEA-21 was the effect of discontinuing the state of Michigan’s minority business contracting program in 1989. According to evidence cited during the debate, within 9 months of the suspension, the proportion of state highway dollars awarded to minority-owned businesses had dropped from 5 percent to 0 percent, while the proportion of state highway dollars awarded to women-owned businesses had declined from about 10 percent to 1 percent. Moreover, these new low rates of participation in state transportation contracting by minority- and women-owned businesses were contrasted with these firms’ participation rates in USDOT-assisted contracts, which were significantly higher. USDOT has not conducted studies or analyses measuring the impact of discontinuing federal or nonfederal DBE programs. Most states and transit authorities that participated in federal DBE programs or nonfederal minority business enterprise and women business enterprise (MBE/WBE) contracting programs that were discontinued could not provide data that would allow us to thoroughly evaluate the impact of that action. For example, we identified one state and one transit authority that had discontinued their federal DBE programs as a result of a court order. However, only the state could provide participation data that would allow us to evaluate the impact of discontinuing the federal DBE program. We also identified 10 states and transit authorities that had participated in nonfederal MBE/WBE programs that were discontinued prior to 2000. Only one state could provide sufficient data for us to evaluate the impact of the action. Conversely, officials from six states and transit authorities, including Michigan, told us that participation data for minorities and women in state transportation contracting for the years immediately before and after the discontinuance of their nonfederal MBE/WBE programs were not available. In addition, few of the states and transit authorities could provide equivalent data on non-MBE/WBEs. This information is important to determine whether changes in MBE/WBEs’ participation rates in state transportation contracting were similar to the changes in the participation rates of non-MBE/WBEs or unique to the MBE/WBE community. Consequently, we could not evaluate the impact of discontinuing these programs. Two states—Minnesota and Louisiana—were able to provide sufficient data to assess the impact of discontinuing a federal and nonfederal program, respectively. We measured DBEs’ and MBE/WBEs’ participation using two indicators—(1) the number of transportation contracts awarded and (2) the dollar amounts awarded through those contracts.
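A minimal sketch of how those two indicators can be tracked year by year around a program change appears below; the yearly totals are hypothetical and are not the Minnesota or Louisiana data.

```python
# Illustrative only: hypothetical yearly totals, not the Minnesota or Louisiana data.
# year -> (DBE contracts awarded, DBE dollars, total contracts awarded, total dollars)
yearly_awards = {
    1995: (40, 11_000_000, 500, 300_000_000),
    1996: (42, 12_000_000, 510, 310_000_000),
    1997: (41, 11_500_000, 505, 305_000_000),
    1998: (9, 2_000_000, 495, 298_000_000),  # hypothetical year the program is discontinued
}

for year in sorted(yearly_awards):
    dbe_count, dbe_dollars, total_count, total_dollars = yearly_awards[year]
    print(f"{year}: {dbe_count / total_count:.1%} of contracts and "
          f"{dbe_dollars / total_dollars:.1%} of dollars awarded to DBEs")
```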
The participation data from these states suggest that discontinuing these programs had a negative impact on DBEs’ and MBE/WBEs’ participation in transportation contracting. For example, in Minnesota, DBEs’ participation in federal transportation contracting remained relatively stable from 1995 to 1998. However, after the discontinuance of Minnesota’s federal DBE program in 1998, DBEs’ participation in federal transportation contracting dramatically declined. (See fig. 2.) Similarly, the data provided by Louisiana indicate that MBE/WBEs’ participation in transportation contracting declined after Louisiana’s nonfederal program was discontinued. As shown in figure 3, MBE/WBEs’ participation in state transportation contracting increased from 1992 to 1995. In 1996, the year the nonfederal program was discontinued, the participation rate of MBE/WBEs in state transportation contracting dropped and continued to decline over the next 4 years. An official from Louisiana attributed the decline in MBE/WBEs’ participation in state transportation contracting to the removal of affirmative action requirements on state-funded projects and the realization by contractors that efforts to include MBE/WBEs were no longer necessary. The Congress identified, and directed us to collect, information that would shed light on the impact of the DBE program across the nation—including information on who benefits from the program, the financial status of the DBE community compared with that of the non-DBE community, and the degree to which DBEs participate in transportation contracting. However, much of this information is not readily available from USDOT, states and transit authorities, and industry groups. Without this information, it is impossible to define the universe of DBEs, compare them with the transportation contracting community as a whole, or gain a clear understanding of the overall impact of the DBE program. In some cases, USDOT has mechanisms in place, such as its quarterly reporting requirement, that could be used to collect additional information, including the annual gross receipts of DBEs and non-DBEs as well as non-DBEs’ participation in subcontracting. In other cases, new mechanisms to collect or process the information are needed, such as a method for determining the total number of certified DBEs nationwide. USDOT could also do more to analyze the information that is currently collected. By not systematically tracking and evaluating the total number of discrimination complaints filed by DBEs, the number of investigations launched, and the outcomes of the investigations, USDOT misses an opportunity to obtain information that could be used to identify trends and problem areas that may need attention. USDOT could also identify ways to improve the effectiveness of its own policies and guidance to states and transit authorities, and ultimately DBEs, by collecting and analyzing the information that the Congress has identified. Such information would help USDOT contribute to an informed congressional debate on the impact of the DBE program in connection with its reauthorization in 2003 and more effectively administer the program. USDOT could also look for ways to provide more guidance to the states and transit authorities that are implementing the DBE program.
Specifically, USDOT’s new regulations put a mechanism in place for setting DBE goals and identified Census Bureau data and DBE directories, bidders lists, and disparity studies as data sources that states and transit authorities could use in setting these goals. However, in our review of disparity studies, we identified problems with these data sources that should be avoided or mitigated to help ensure that goals set by states and transit authorities are based on the level of DBE participation expected in the absence of discrimination—specifically, a level consistent with the availability of ready, willing, and able DBEs in the relevant market. USDOT provided examples of how to set DBE goals in its regulations, but has issued minimal guidance to the states and transit authorities on how to avoid the types of data and methodological problems we identified and ensure that the data sources used to set goals are as reliable as possible. USDOT could provide additional guidance to help states and transit authorities carry out the program.

To assist USDOT in administering the DBE program and to help inform the Congress about the impact of the program, we recommend that the Secretary of Transportation take the following steps:

Develop and implement a method for states and transit authorities to assign unique identification numbers to DBEs so that the total number of DBEs certified nationwide can be determined.

Amend the quarterly reporting requirements for states and transit authorities to include information on the annual gross receipts of DBEs and non-DBEs and the number and dollar amount of the subcontracts awarded to non-DBEs. This information could be used to gain a more complete understanding of the participation rate of DBEs in subcontracting and of their financial status compared with other transportation contracting firms. Furthermore, USDOT should compile, analyze, and publish (in aggregate format) the information collected in the quarterly reports.

Compile and analyze data on written complaints of discrimination filed by DBEs with USDOT in order to (1) determine trends in the number and types of complaints filed and (2) identify problem areas that require action.

Periodically compile information on DBEs, through a survey or other appropriate mechanism, to better understand the types of programs needed to assist these firms.

To better assist states and transit authorities in implementing the DBE program and help ensure that DBE participation goals reflect the availability of ready, willing, and able DBEs in the relevant market, we recommend that the Secretary of Transportation provide specific guidance to states and transit authorities on strategies to mitigate the potential problems associated with using Census Bureau data and DBE directories, disparity studies, and bidders lists to set their DBE goals. We recognize that the implementation of these recommendations may result in some additional costs for USDOT, states, and transit authorities. However, given existing data collection requirements and the benefits associated with these recommendations, we believe such costs are warranted.

We provided USDOT with a draft of this report for review and comment. On May 1, 2001, the Assistant Secretary for Administration responded for USDOT. USDOT did not comment on our recommendations.
Instead, USDOT offered comments to clarify the role of disparity studies in the DBE program, the evidentiary value of disparity studies, the need for states and transit authorities to use the best available data in DBE goal-setting, and the status of DBE and non-DBE participation data. During recent meetings and discussions, USDOT provided similar comments, which we considered and incorporated where appropriate. Therefore, we believe that the majority of USDOT’s comments are already reflected in the report. USDOT’s comments and our responses are located in appendix IV.

We conducted our review from August 2000 through April 2001 in accordance with generally accepted government auditing standards.

We are sending copies of this report to congressional committees with responsibilities for the activities discussed in this report; the Honorable Norman Y. Mineta, Secretary of Transportation; the Honorable Mitchell Daniels, Director of the Office of Management and Budget; Hiram Walker, Acting Deputy Administrator, Federal Transit Administration; and Vincent F. Schimmoller, Deputy Executive Director, Federal Highway Administration. We will make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contributors to this report are listed in appendix V.

The following is the text of the section of the Transportation Equity Act for the 21st Century (1998) requiring GAO’s study. (K) the impact of the requirement of paragraph (1), and any program carried out to comply with paragraph (1), on competition and the creation of jobs, including the creation of jobs for socially and economically disadvantaged individuals.

GAO’s Survey Instrument and Overall Results
Survey of State DOTs and U.S. Department of Transportation Disadvantaged Business Enterprise Programs
U.S. General Accounting Office, 441 G Street, NW, Washington, D.C. 20548-0001

The Transportation Equity Act for the 21st Century required the U.S. General Accounting Office (GAO) to examine the Department of Transportation’s Disadvantaged Business Enterprise (DBE) program. We would prefer to have data based on the federal fiscal year (FY) (October 1 to September 30). Please indicate the way the data from your agency will be provided: (N=83)

As part of our study of the DBE program, we are surveying Departments of Transportation (DOTs) in each of the 50 states, the District of Columbia, and Puerto Rico and selected Transit Authorities. We recognize that there are great demands on your time; however, your cooperation is critical to our ability to provide current and complete information to the Congress. This questionnaire asks for information about the federal DBE program that your agency administers and the firms in the program. For definitions of terms used throughout this questionnaire, please see U.S. DOT’s regulations on the DBE program. Please complete and mail your questionnaire within three weeks of receipt. If the enclosed envelope is misplaced, the questionnaire should be returned to: Nikki Clowers, U.S. General Accounting Office, 441 G Street, NW, Mail Room 6K17R, Washington, D.C. 20548. If you have any questions, please contact Nikki Clowers at [email protected] or (202) 512-4010.

1.What are your total DBE 3.What race-neutral programs did your participation goals for FY 1999 – FY 2001, and for FY 2000 and FY 2001, your goals to be achieved through race-conscious and race-neutral programs?
(Please indicate the percentage to be achieved through each type of program and the total percentages.) 14.6% (avg.) (N=67) 8.6% (avg.) (N=70) 5.4% (avg.) (N=73) 13.5% (avg.) (N=80) 8.2% (avg.) (N=69) 5.2% (avg.) (N=73) 13.1% (avg.) (N=78) 2.Which of the following sources were 4.How many certified DBE firms were used to set your FY 2000 DBE participation goal? (Check all that apply.) N=82 available (i.e., in your database or directory) to your agency in FY 1999 and FY 2000? (Enter number. If none, enter 0.) FY 1999: 559.4 (avg.) (N=78) FY 2000: 551.5 (avg.) (N=81) 6.1% DBE goal(s) from another 26.8% Other Please specify: _____________________________ _____________________________ _____________________________ 5.Please indicate the number of DBE firms that were awarded prime contracts through your agency, the number of prime contracts that were awarded to these firms, and the total value of these prime contracts for FY 1999 and FY 2000. (Enter numbers and dollar amounts. If none, enter 0.) Total value of prime contracts awarded to these DBE firms # of Firms: 19.1 (avg.) # of Contracts: 40.4 (avg.) $ 8,203,394 (avg.) (N=70) (N=79) (N=78) # of Firms: 18.4 (avg.) # of Contracts: 37.4 (avg.) $ 6,585,338 (avg.) (N=71) (N=77) (N=76) agency, the number of subcontracts that were awarded to these firms, and the total value of these subcontracts for FY 1999 and FY 2000. (Enter numbers and dollar amounts. If none, enter 0.) Total value of subcontracts awarded to these DBE firms # of Firms: 66.9 (avg.) # of Subcontracts: 254.5 (avg.) $ 27,006,958 (avg.) (N=72) N=79) (N=79) # of Firms: 61.6 (avg.) # of Subcontracts: 235.1 (avg.) $ 24,427,942 (avg.) (N=71) (N=79) (N=79) provided information for Part 7b. Two agencies gave estimates for Part 7a. One agency did not answer question. 7a.What were the annual gross receipts of the DBE firms that were awarded prime and/or subcontracts through your agency in FY 1999? Use the DBE firms’ most recent certification or recertification to determine annual gross receipts. If you are not able to provide this information, please answer Question 7b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.) # of Firms: 17.5 (avg.) # of Firms: 16.1 (avg.) # of Firms: 4.8 (avg.) # of Firms: 1.2 (avg.) (N=29) (N=30) (N=30) (N=29) the above information, please indicate the reason(s). (Check all that apply.) 15.4% Information is not collected. 15.4% Information is being collected, but is not yet available. 11.5% Our agency relies on the certification of other jurisdictions. 7.7% Information is verified during certification and recertification, but it is not retained. 61.5% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 19.2% Other. Please explain: (60%) provided information for Part 8b. Two agencies gave estimates for Part 8a. 8a. What were the annual gross receipts of the DBE firms that were awarded prime and/or subcontracts through your agency in FY 2000? Use the DBE firms’ most recent certification or recertification to determine annual gross receipts. If you are not able to provide this information, please answer Question 8b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.) Annual Gross Receipts of DBE Firms That Were Awarded Prime Contracts and/or Subcontracts in FY 2000 $1,000,000 to $5,000,000 $5,000,001 to $10,000,000 $10,000,001 to $16,600,000 # of Firms: 15.6 (avg.) # of Firms: 14.7 (avg.) # of Firms: 4.3 (avg.) 
# of Firms: 1.2 (avg.) (N=33) (N=33) (N=33) (N=31) the above information, please indicate the reason(s). (Check all that apply.) 12.0% Information is not collected. 24.0% Information is being collected, but is not yet available. 12.0% Our agency relies on the certification of other jurisdictions. 6.0% Information is verified during certification and recertification, but it is not retained. 64.0% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 12.0% Other. Please explain: (76%) provided information for Part 9b. One agency did not answer question. 9a.What were the annual gross receipts of the DBE firms that were not awarded prime or subcontracts through your agency in FY 1999? Use the DBE firms’ most recent certification or recertification to determine annual gross receipts. If you are not able to provide this information, please answer Question 9b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.) Annual Gross Receipts of DBE Firms That Were Not Awarded Prime Contracts or Subcontracts in FY 1999 $1,000,000 to $5,000,000 $5,000,001 to $10,000,000 $10,000,001 to $16,600,000 # of Firms: 122.2 (avg.) # of Firms: 37.2 (avg.) # of Firms: 6.3 (avg.) # of Firms: 2.5 (avg.) (N=19) (N=19) (N=19) (N=19) the above information, please indicate the reason(s). (Check all that apply.) 22.2% Information is not collected. 14.3% Information is being collected, but is not yet available. 9.5% Our agency relies on the certification of other jurisdictions. 4.8% Information is verified during certification and recertification, but it is not retained. 54.0% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 15.9% Other. Please explain: (75%) provided information for Part 10b. One agency gave estimates for Part 10a. 10a. What were the annual gross receipts of the DBE firms that were not awarded prime or subcontracts through your agency in FY 2000? Use the DBE firms’ most recent certification or recertification to determine annual gross receipts. If you are not able to provide this information, please answer Question 10b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.) Annual Gross Receipts of DBE Firms That Were Not Awarded Prime Contracts or Subcontracts in FY 2000 $1,000,000 to $5,000,000 $5,000,001 to $10,000,000 $10,000,001 to $16,600,000 # of Firms: 102.3 (avg.) # of Firms: 33.2 (avg.) # of Firms: 5.9 (avg.) # of Firms: 2.5 (avg.) (N=21) (N=21) (N=20) (N=20) 21.0% Information is not collected. 17.4% Information is being collected, but is not yet available. 9.7% Our agency relies on the certification of other jurisdictions. 4.8% Information is verified during certification and recertification, but it is not retained. 54.8% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 12.9% Other. Please explain: agencies (65%) provided information for Part 11b. One agency did not answer question. 11a. What was the personal net worth of individuals who own and control DBE firms that were awarded prime contracts through your agency in FY 2000? Use the DBE firms’ most recent certification or recertification to determine personal net worth. We are aware that you may not have personal net worth information for all DBE firms; however, please provide the information that is available. 
If you cannot provide any personal net worth information, please answer Question 11b. (Enter the number of firms for each category of personal net worth. If none, enter 0.) Personal Net Worth of Individuals Who Own and Control DBE Firms That Were Awarded Prime Contracts in FY 2000 $100,000 to $250,000 $250,001 to $500,000 $500,001 to $750,000 # of Firms: 1.4 (avg.) # of Firms: 1.9 (avg.) # of Firms: 1.8 (avg.) # of Firms: 1.0 (avg.) (N=24) (N=24) (N=25) (N=24) 5.6% Information is not collected. 16.7% Information is being collected, but is not yet available. 13.0% Our agency relies on the certification of other jurisdictions. 9.3% Information is verified during certification and recertification, but it is not retained. 66.7% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 9.3% Other. Please explain: agencies (67%) provided information for Part 12b. 12a. What was the personal net worth of individuals who own and control DBE firms that were awarded subcontracts through your agency in FY 2000? Use the DBE firms’ most recent certification or recertification to determine personal net worth. We are aware that you may not have personal net worth information for all DBE firms; however, please provide the information that is available. If you cannot provide any personal net worth information, please answer Question 12b. (Enter the number of firms for each category of personal net worth. If none, enter 0.) Personal Net Worth of Individuals Who Own and Control DBE Firms That Were Awarded Subcontracts in FY 2000 $100,000 to $250,000 $250,001 to $500,000 $500,001 to $750,000 # of Firms: 11.9 (avg.) # of Firms: 8.8 (avg.) # of Firms: 6.1(avg.) # of Firms: 3.7 (avg.) (N=27) (N=26) (N=27) (N=26) 5.4% Information is not collected. 17.9% Information is being collected, but is not yet available. 12.5% Our agency relies on the certification of other jurisdictions. 8.9% Information is verified during certification and recertification, but it is not retained. 67.9% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 7.1% Other. Please explain: (81%) provided information for Part 13b. One agency did not answer question 13. 13a. What was the personal net worth of individuals who own and control DBE firms that were not awarded prime contracts through your agency in FY 2000? Use the DBE firms’ most recent certification or recertification to determine personal net worth. We are aware that you may not have personal net worth information for all DBE firms; however, please provide the information that is available. If you cannot provide any personal net worth information, please answer Question 13b. (Enter the number of firms for each category of personal net worth. If none, enter 0.) Personal Net Worth of Individuals Who Own and Control DBE Firms That Were Not Awarded Prime Contracts in FY 2000 $100,000 to $250,000 $250,001 to $500,000 $500,001 to $750,000 # of Firms: 60.2 (avg.) # of Firms: 35.6 (avg.) # of Firms: 26.0 (avg.) # of Firms: 12.9 (avg.) (N=14) (N=14) (N=15) (N=14) 10.5% Information is not collected. 16.4% Information is being collected, but is not yet available. 10.5% Our agency relies on the certification of other jurisdictions. 7.5% Information is verified during certification and recertification, but it is not retained. 64.2% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report. 6.0% Other. 
(78 percent of respondents provided information for Part 14b.)

14a. What was the personal net worth of individuals who own and control DBE firms that were not awarded subcontracts through your agency in FY 2000? Use the DBE firms' most recent certification or recertification to determine personal net worth. We are aware that you may not have personal net worth information for all DBE firms; however, please provide the information that is available. If you cannot provide any personal net worth information, please answer Question 14b. (Enter the number of firms for each category of personal net worth. If none, enter 0.)
Personal net worth categories shown included $100,000 to $250,000; $250,001 to $500,000; and $500,001 to $750,000.
Average number of firms reported: 46.4 (N=18); 34.2 (N=17); 21.8 (N=17); 13.8 (N=17).

14b. Reasons reported for not providing the information requested in 14a:
12.3% Information is not collected.
13.9% Information is being collected, but is not yet available.
10.8% Our agency relies on the certification of other jurisdictions.
7.7% Information is verified during certification and recertification, but it is not retained.
64.6% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report.
6.2% Other.

15. How many DBE firms became ineligible for the DBE program in FY 1999 and FY 2000 because they exceeded the program's statutory cap on annual gross receipts ($16.6 million)? (Enter number. If none, enter 0.) N=83
FY 1999: 0.5 (avg.) (N=60)
FY 2000: 0.3 (avg.) (N=60)

16. How many DBE firms became ineligible for the DBE program in FY 1999 and FY 2000 because they exceeded applicable SBA small business size standards? (Enter number. If none, enter 0.) N=83
FY 1999: 1.1 (avg.) (N=62)
FY 2000: 1.7 (avg.) (N=63)

17. How many DBE firms became ineligible for the DBE program in FY 2000 because individuals who own or control the firm exceeded the program's cap on personal net worth ($750,000)? (Enter number. If none, enter 0.) N=83
FY 2000: 6.1 (avg.) (N=65)

18. Please estimate the cost of administering the DBE program in your agency. (In your estimate include such things as salaries, certification costs, technical assistance, database development and maintenance, and contracted studies/analyses.) N=83
FY 1999: $633,124 (avg.) (N=55)
FY 2000: $772,160 (avg.) (N=57)

19. Please indicate the number of non-DBE firms that were awarded prime contracts through your agency, the number of prime contracts that were awarded to these firms, and the total value of these contracts for FY 1999 and FY 2000. (Enter numbers and dollar amounts. If none, enter 0.)
FY 1999: 202.6 firms (avg.) (N=58); 474.8 contracts (avg.) (N=67); $313,477,141 (avg.) (N=69)
FY 2000: 185.2 firms (avg.) (N=59); 454.2 contracts (avg.) (N=68); $308,620,950 (avg.) (N=69)
Corresponding data reported for subcontracts awarded to non-DBE firms:
FY 1999: 180.1 firms (avg.) (N=28); 597.1 subcontracts (avg.) (N=27); $92,747,776 (avg.) (N=27)
FY 2000: 190.3 firms (avg.) (N=29); 604.6 subcontracts (avg.) (N=28); $96,320,366 (avg.) (N=28)

(Two agencies gave estimates for Part 21a; the remaining respondents provided information for Part 21b.)

21a. What were the annual gross receipts of the non-DBE firms that were awarded prime and/or subcontracts through your agency in FY 1999? If you are not able to provide this information, please answer Question 21b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.)
Average number of firms reported (N=3 for each category): 21.3; 36.0; 11.7; 11.7; 17.7.

21b. If you could not provide the above information, please indicate the reason(s). (Check all that apply.)
32.5% Information is not collected, but will be in the future.
18.8% Information is not collected, and there are no plans to collect it in the future.
12.5% Information is being collected, but is not yet available.
31.3% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report.
11.3% Other.

22a. What were the annual gross receipts of the non-DBE firms that were awarded prime and/or subcontracts through your agency in FY 2000? If you are not able to provide this information, please answer Question 22b. (Enter the number of firms for each category of annual gross receipts. If none, enter 0.)
Average number of firms reported (N=5 for each category): 16.8; 20.6; 9.0; 7.8; 10.4.

22b. Reasons reported for not providing the information requested in 22a:
29.5% Information is not collected, but will be in the future.
20.5% Information is not collected, and there are no plans to collect it in the future.
16.7% Information is being collected, but is not yet available.
28.2% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report.
10.3% Other.

(One agency did not answer the question; the remaining respondents provided information for Part 23b.)

23a. What was the personal net worth of individuals who own and control non-DBE firms that were awarded prime and/or subcontracts through your agency in FY 2000? If you are not able to provide this information, please answer Question 23b. (Enter the number of firms for each category of personal net worth. If none, enter 0.)
Personal net worth of individuals who own and control non-DBE firms that were awarded prime contracts and/or subcontracts in FY 2000, by category ($100,000 to $250,000; $250,001 to $500,000; $500,001 to $750,000): no respondents provided information for any category.

23b. Reasons reported for not providing the information requested in 23a:
8.5% Information is not collected, but will be in the future.
62.2% Information is not collected, and there are no plans to collect it in the future.
4.9% Information is being collected, but is not yet available.
19.5% Information is not maintained in an electronic database, and would be difficult and/or time-consuming to report.
11.0% Other.

24. Has the federal DBE program administered by your agency been the subject of litigation? (Please check one.) N=83
71% No (Please go to Question 27.)

On whether complaints were filed by DBE firms with the agency in FY 1999 and FY 2000 (Please check one; N=83): 81% No (Please go to Question 31.)

28. How many complaints were filed? (Enter the number of complaints. If none, enter 0.) N=16
FY 1999: 15 (sum) (N=16)
FY 2000: 16 (sum) (N=15)

29. Of the formal written discrimination complaints filed by DBE firms in FY 1999 and FY 2000, how many were investigated by your agency? (Enter the number of complaints. If none, enter 0.) N=16
Number of complaints investigated in FY 1999: 15 (sum) (N=16)
Number of complaints investigated in FY 2000: 14 (sum) (N=15)
Additional data reported under the litigation and complaint follow-up items: FY 1999: $7,166.7 (avg.) (N=15) and FY 2000: $19,897.3 (avg.) (N=13); FY 1999: 2 (sum) (N=14) and FY 2000: 2 (sum) (N=12).

31. Have you conducted, or are you conducting, any studies or analyses to determine if awarding prime contracts to DBE firms affects contract costs? (Please check one.) N=83
1.2% Yes; the remaining respondents answered No.

32. Have you conducted, or are you conducting, any studies or analyses to determine if awarding subcontracts to DBE firms affects contract costs? (Please check one.) N=83
98.8% No; 1.2% Yes.

33. Have you conducted, or are you conducting, any studies or analyses of discrimination against DBE firms on the basis of race, color, national origin, or sex? (Please check one.) N=83
67.5% No; 32.5% Yes.

34. Have you conducted, or are you conducting, any studies or analyses of discrimination on the basis of race, color, national origin, or sex against DBE construction firms by the financial, credit, insurance, or bond markets and/or in other contracts? (Please check one.) N=82
84.2% No; 15.9% Yes.

35. Have you conducted, or are you conducting, any studies or analyses of other factors that limit the ability of DBE firms to compete for prime and/or subcontracts? (Please check one.) N=83
79.5% No; 20.5% Yes.

36. Have you conducted, or are you conducting, any studies or analyses on the impact of the DBE program on competition and the creation of jobs? (Please check one.) N=83
92.8% No; 7.2% Yes.

37. In addition to the federal DBE program, is your agency subject to the requirements of a non-federal minority business enterprise (MBE), women-owned business enterprise (WBE), or another DBE program? (Please check one.) N=83
65.1% No; 34.9% Yes.

38. Has your agency participated in a non-federal MBE, WBE, or DBE program that has been suspended, repealed, or otherwise terminated? (Please check one.) N=83
85.5% No (Please go to Question 40.)

39. Please indicate the year(s) that program(s) were repealed and the type of program(s) repealed.

On later items, respondents were asked to weigh the benefits of the DBE program against its costs (N=83): 32.9% said the benefits greatly outweigh the costs, 25.3% said the benefits somewhat outweigh the costs, and 22.8% reported no basis to judge. Asked whether their program had been approved by DOT (N=83), 90% answered Yes and were asked when the program was approved. Respondents were also asked about the technical assistance they had received from FTA on implementing the revised DBE regulations (N=83).

49. Has your uniform certification program (UCP) been approved by the U.S. DOT? (Please check one.) N=7
71.4% Yes (When was your plan approved?)
On a related item, 91.5% answered No (Please go to Question 50) and 8.5% answered Yes and were asked when their plan was submitted. Asked whether a system had been implemented to track and monitor the information identified in the revised DBE regulation (N=81), 43.2% answered Yes and were asked when the system was implemented.
Thank you very much for taking time to complete this questionnaire. If you would like to make additional comments concerning any of the questions or comment on topics not covered, please feel free to use this page or to attach additional pages.
The Transportation Equity Act for the 21st Century directed us to evaluate the impact of the U.S. Department of Transportation's (USDOT) Disadvantaged Business Enterprise (DBE) program throughout the nation and address 11 specific objectives. We grouped the statute's 11 objectives into the following 4 researchable questions:
1. How has the DBE program changed since 1999?
2. What are the characteristics of DBEs and non-DBEs that receive USDOT-assisted highway and transit contracts?
3. What do selected sources indicate about discrimination or other factors that may limit DBEs' ability to compete for USDOT-assisted contracts?
4. What is the impact of the DBE program on costs, competition, and job creation as well as the impact of discontinuing federal and nonfederal DBE programs?
To determine how the DBE program has changed since 1999 and to identify the characteristics of DBEs and non-DBEs that receive USDOT-assisted contracts, we reviewed USDOT's regulations and guidance pertaining to the DBE program. We also interviewed USDOT officials and representatives from minority-owned business and transportation associations. In addition, we surveyed the departments of transportation of the 50 states, the District of Columbia, and Puerto Rico, and 36 transit authorities throughout the nation. (We planned to survey all transit authorities required to submit plans for a DBE program. However, the Federal Transit Administration could not provide an accurate list of these transit authorities.) The 36 transit authorities we surveyed are the largest transit authorities in the nation as defined by the number of unlinked passenger trips in 1999. They also received about two-thirds of all federal transit grant funds obligated in 1999. Our survey was designed to obtain information on the issues that TEA-21 directed us to examine, including the participation rates of DBEs in USDOT-assisted contracts, the annual gross receipts of DBEs and non-DBEs, and the cost of administering the DBE program. To help design our survey, we obtained input from USDOT, states, and transit authorities. After we developed our survey, we pre-tested the questionnaire with officials of 4 state departments of transportation (states) and 5 transit authorities. We selected states and transit authorities from a variety of geographical regions for our pre-tests. For each pre-test, members of our staff met with officials from the state or transit authority and simulated the actual survey experience by asking the officials to fill out the questionnaire. We also interviewed the officials after they had completed the questionnaire to ensure that (1) the questions were understandable and clear, (2) the terms used were precise, (3) the questionnaire did not place undue burden on state or transit authority officials, and (4) the questionnaire was unbiased. Appropriate changes were incorporated in the final survey based on our pre-testing.
In addition, we provided a draft copy of our questionnaire to USDOT officials and incorporated comments from them, as appropriate. To increase the response rate of our survey, we sent two additional reminders after the survey was mailed in October 2000, including (1) a postcard sent one week after the survey and (2) a follow-up letter and replacement survey to nonrespondents sent about 3 weeks after the initial mailing. In addition, we conducted follow-up phone calls to nonrespondents through January 2001. We received survey responses from all 52 states and 31 transit authorities for a response rate of 94 percent. To evaluate the existence of discrimination against DBEs, we reviewed recent court cases that have addressed the constitutionality of the federal DBE program, transportation-specific disparity studies, and written discrimination complaints filed by DBEs with USDOT, states, and transit authorities. Specifically: We reviewed the court decisions that have addressed the constitutionality of the federal DBE program since the Supreme Court's 1995 decision in Adarand Constructors, Inc. v. Pena. We identified decisions meeting these criteria and consulted with officials from USDOT and the Department of Justice (DOJ) to ensure that we included all relevant decisions in our review. We also obtained information from USDOT and DOJ about pending cases concerning the constitutionality of the federal DBE program. We identified and reviewed all (14) transportation-specific disparity studies published between 1996 and 2000. We reviewed disparity studies because DOJ has stated that they are of particular relevance for affirmative action measures in federal programs providing grants to states and local governments, and because courts have recognized them as a source of evidence of discrimination in considering the federal DBE program. In addition, USDOT has identified disparity studies as one source that states and transit authorities could use to help set their federal DBE participation goals. Numerous state and local governments have used them to support their minority business contracting programs and to set their federal DBE goals. We selected disparity studies that (1) were published between 1996 and 2000, (2) contained a separate disparity analysis on transportation contracting, and (3) used a disparity ratio—that is, a comparison of the availability of MBE/WBEs to their utilization in contracts—as an indicator of discrimination. These criteria are generally consistent with USDOT's regulations, which state that any disparity studies used in the DBE goal setting process should be as recent as possible and focused on the transportation contracting industry. To determine whether the disparity studies' findings were reliable, we evaluated the methodological soundness of the studies using common social science and statistical practices. For example, we systematically examined each study's methodology, including its assumptions and limitations, data sources, analyses, and conclusions. To identify relevant disparity studies, we obtained information from USDOT, DOJ, the Policy Sciences Graduate Program of the University of Maryland Baltimore City, and the Minority Business Enterprise Legal Defense and Education Fund, Inc. (MBELDEF). In addition, we obtained information from the five consulting firms most noted for conducting disparity studies: National Economic Research Associates, Inc., BBC Research and Consulting, MGT of America, Mason-Tillman Associates, Ltd., and DJ Miller and Associates, Inc.
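The disparity ratio criterion described above involves straightforward arithmetic; the sketch below illustrates one common formulation, in which utilization is divided by availability, using purely hypothetical firm counts and contract dollars rather than figures from any study we reviewed.

# Minimal sketch of a disparity ratio calculation, using hypothetical figures.
# One common formulation divides utilization by availability; values well
# below 1.0 are typically read as evidence of underutilization of MBE/WBEs.

def disparity_ratio(mbe_wbe_contract_dollars, total_contract_dollars,
                    available_mbe_wbe_firms, total_available_firms):
    utilization = mbe_wbe_contract_dollars / total_contract_dollars
    availability = available_mbe_wbe_firms / total_available_firms
    return utilization / availability

# Hypothetical example: MBE/WBEs are 15 percent of available firms but
# receive 6 percent of contract dollars.
ratio = disparity_ratio(mbe_wbe_contract_dollars=6_000_000,
                        total_contract_dollars=100_000_000,
                        available_mbe_wbe_firms=150,
                        total_available_firms=1_000)
print(f"Disparity ratio: {ratio:.2f}")  # 0.40 in this hypothetical example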
The evidence—along with its strengths and weaknesses—contained in any disparity study would be limited to the geographical scope of that particular study. Moreover, because we limited our review to transportation-specific disparity studies, our conclusions cannot be generalized to disparity studies pertaining to other industries. We interviewed USDOT officials about written complaints of discrimination that DBEs filed with USDOT. We also reviewed USDOT's data on written complaints of discrimination filed by DBEs since fiscal year 1996. In addition, we analyzed information on written complaints of discrimination filed with states and transit authorities, collected through our nationwide survey. We recognize that we did not review all of the information that could be relevant to the issue of discrimination in transportation contracting. However, we chose to review sources directly related to transportation contracting and the federal DBE program, including those suggested by USDOT and minority-owned business and transportation associations. Since we did not conduct an exhaustive review and evaluation of all evidence of discrimination, our results cannot be used to support or dismiss claims about the existence of discrimination against DBEs throughout the nation. Moreover, we did not address whether the DBE program satisfies the requirements of strict scrutiny and is therefore constitutional. To identify factors, other than discrimination, that may limit the ability of DBEs to compete for transportation contracts, we reviewed information collected in our nationwide survey and recent GAO reports. In addition, we interviewed officials from USDOT and the Small Business Administration (SBA) and representatives from the American Road and Transportation Builders Association, Associated General Contractors of America, Minority Business Enterprise Legal Defense and Education Fund, Inc., Women Construction Owners and Executives, and National Black Chamber of Commerce. To determine the impact of the DBE program on costs, competition, and job creation, we collected data from states and transit authorities through our survey and from USDOT. In addition, we interviewed officials from USDOT and SBA as well as representatives from minority- and women-owned business groups and transportation associations. To evaluate the impact of discontinuing a federal DBE program, we identified the states and transit authorities that had discontinued the federal DBE program through our review of the court decisions that have addressed the constitutionality of the federal DBE program since the Supreme Court's 1995 decision in Adarand Constructors, Inc. v. Pena. We identified 1 state and 1 transit authority that had discontinued their federal DBE programs due to court decisions. We interviewed officials from the state and transit authority and requested DBE and non-DBE participation data in federal transportation contracting for the years immediately before and after the discontinuance. Only the state DOT provided the requested data. To assess the impact of discontinuing a nonfederal DBE program, we used our survey to identify states and transit authorities that had participated in a nonfederal DBE program that was discontinued. Twelve survey respondents indicated that they had participated in such programs. We excluded the two transit authorities that had participated in nonfederal DBE programs that were discontinued in 2000 because sufficient time had not elapsed to determine the impact of this change.
We contacted the remaining 10 states and transit authorities and requested data on DBEs' and non-DBEs' participation in nonfederal and federal transportation contracting for the years immediately before and after the program was discontinued. Eight of the 10 states and transit authorities responded to our requests for data; however, only one state could provide the data necessary to thoroughly evaluate the impact of discontinuing its program—that is, data on DBEs' and non-DBEs' participation in nonfederal transportation contracting before and after the nonfederal program was discontinued. We conducted our review from August 2000 through April 2001 in accordance with generally accepted government auditing standards. The following are GAO's comments on USDOT's letter dated May 1, 2001. 1. As noted on pages 24 and 74, our objective was not to address the question of whether the DBE program satisfies the requirements of strict scrutiny and is therefore constitutional, as USDOT seems to suggest. In particular, we did not attempt to determine whether sufficient evidence of discrimination exists to demonstrate that the DBE program serves a compelling interest. Further, as stated on pages 11, 14, and 24, we recognize that disparity studies are not required to support the federal DBE program, represent one of several sources of evidence of discrimination, and are but one method that states and transit authorities could use to set their federal DBE goals. 2. We agree with USDOT's assertion that an inference of discrimination can be drawn from studies finding statistical disparities between the availability and utilization of MBE/WBEs. Consequently, we chose to review disparity studies as one source of evidence of discrimination. Also, as we stated on pages 6 and 29, all 14 studies we reviewed found disparities between the availability and utilization of MBE/WBEs in contracting, and taken as a whole, these studies suggest discrimination against MBE/WBEs. However, the data limitations and methodological weaknesses we identified create uncertainties about their findings. Furthermore, we agree with USDOT that we did not review all sources of evidence of discrimination against DBEs—a point we make repeatedly throughout the report. While we could not review all possible sources, we chose to review the sources directly pertaining to transportation contracting and the federal DBE program. As such, one of the sources we reviewed was the set of transportation-specific disparity studies published between 1996 and 2000. As noted on page 29, we defined transportation-specific studies as those containing a separate disparity analysis on transportation contracting. While the Urban Institute report cited by USDOT included several studies focusing on transportation contracting, it combined these studies with a variety of others in its analysis and did not contain a separate disparity analysis of transportation contracting. In addition, although the Urban Institute published its report in 1997, all of the disparity studies it examined had been published before 1996. Therefore, the Urban Institute report did not meet our criteria. We did not discuss all of the details about the methods we used to analyze the 14 disparity studies because the methods are commonly used in social science research. To help clarify this for readers who are unfamiliar with these methods, we have added an example to our discussion in appendix III. 3.
We agree with USDOT that states and transit authorities must use the best available data in setting their DBE goals and that there are inherent limitations in conducting disparity studies. However, we disagree that we are seeking an unobtainable level of sophistication and detail in these endeavors. Rather, we believe we identified some basic problems with the data sources that should be recognized and, in most cases, could reasonably be avoided in conducting disparity studies and setting DBE goals. For example, if bidders lists are used to set DBE goals, they should be as up-to-date as possible in order to avoid overstating or understating the number of available firms. 4. We disagree that the information necessary to calculate DBE participation rates in subcontracts is routinely made available to DOT. To calculate DBE participation rates in prime contracts and subcontracts, one needs the number and value of prime contracts and subcontracts awarded to DBEs and the number and value of prime contracts and subcontracts awarded to non-DBEs. We were able to calculate DBE participation rates in prime contracts because most states and transit authorities could provide the number and value of prime contracts awarded to DBEs and non-DBEs. However, the majority of states and transit authorities could not provide the number and value of subcontracts awarded to non-DBEs, and therefore the data on DBEs' participation rates in subcontracts are limited. Information on the number and value of subcontracts awarded to non-DBEs is not reported to USDOT, and USDOT does not maintain this information. Most states and transit authorities provided the number and value of subcontracts awarded to DBEs—information that is routinely provided to USDOT. However, this information alone does not allow one to calculate DBEs' participation rates in subcontracts.
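The arithmetic behind comment 4 can be made concrete with a short sketch. The dollar figures below are hypothetical; the point is that the DBE participation rate is DBE dollars divided by total (DBE plus non-DBE) dollars, so the rate cannot be computed for subcontracts when only the DBE side is reported.

# Sketch of the participation-rate arithmetic discussed in comment 4.
# All dollar figures are hypothetical.

def participation_rate(dbe_dollars, non_dbe_dollars):
    """Share of contract dollars awarded to DBEs."""
    total = dbe_dollars + non_dbe_dollars
    return dbe_dollars / total

# Prime contracts: both DBE and non-DBE values are reported, so the rate
# can be computed.
prime_rate = participation_rate(dbe_dollars=12_000_000,
                                non_dbe_dollars=288_000_000)
print(f"DBE share of prime contract dollars: {prime_rate:.1%}")  # 4.0%

# Subcontracts: a DBE value alone (say, $9 million) is not enough --
# without the non-DBE subcontract value, the denominator is unknown and
# no participation rate can be calculated.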
The Department of Transportation's (DOT) Disadvantaged Business Enterprise (DBE) program seeks to remedy the effects of current and past discrimination against small businesses owned and controlled by socially and economically disadvantaged persons and to foster equal opportunity in transportation contracting. This report provides information on (1) important changes made to the program since 1999; (2) characteristics of DBEs and non-DBEs that receive DOT-assisted highway and transit contracts; (3) evidence of discrimination and other factors that may limit DBEs' ability to compete for DOT-assisted contracts; and (4) the program's impact on costs, competition, and job creation and the impact of discontinuing the federal and nonfederal DBE programs. GAO found that the program has changed significantly since DOT issued new regulations in 1999 in response to a 1995 Supreme Court decision that heightened standards for federal programs that use race or ethnicity as a criterion in decision-making. The new regulations overhauled the DBE goal-setting process. For example, states and transit authorities are no longer required to justify goals lower than 10 percent--the amount identified in the statutory DBE provision. Rather, goals are to be based on the number of "ready, willing, and able" DBEs in local markets. GAO was unable to determine the characteristics of DBE participants because of a lack of information. Without this information, it is impossible to define the universe of DBEs, compare them with the transportation contracting community as a whole, or gain a clear understanding of the program's impact. DOT does not systematically track information on discrimination complaints filed by DBEs. Although DOT receives written discrimination complaints filed by DBEs, it could not provide the total number of such complaints, the total number of investigations launched, or the outcomes of the investigations.
Pension plans can generally be characterized as either defined benefit or defined contribution plans. In a defined benefit plan, the amount of the benefit payment is determined by a formula typically based on the retiree's years of service and final average salary, and is most often provided as a lifetime annuity. For state and local government retirees, postretirement cost-of-living adjustments (COLAs) are frequently provided in defined benefit plans. But benefit payments are generally reduced for early retirement, and in some cases payments may be offset for receipt of Social Security. In a defined contribution plan, the key determinants of the benefit amount are the employee's and employer's contribution rates, and the rate of return achieved on the amounts contributed to an individual's account over time. The employee assumes the investment risk; the account balance at the time of retirement is the total amount of funds available, and unlike with defined benefit plans, there are generally no COLAs. Until depleted, however, a defined contribution account balance may continue to earn investment returns after retirement, and a retiree could use the balance to purchase an inflation-protected annuity. Also, defined contribution plans are more portable than defined benefit plans, as employees own their accounts individually and can generally take their balances with them when they leave government employment. There are no reductions based on early retirement or for participation in Social Security. Both government employers and employees generally make contributions to fund state and local pension benefits. For plans in which employees are covered by Social Security, the median contribution rate in fiscal year 2006 was 8.5 percent of payroll for employers and 5 percent of pay for employees, in addition to 6.2 percent of payroll from both employers and employees to Social Security. For plans in which employees are not covered by Social Security, the median contribution rate was 11.5 percent of payroll for employers and 8 percent of pay for employees. Actuaries estimate the amount that will be needed to pay future benefits. The benefits that are attributable to past service are called "actuarial accrued liabilities." (In this report, the actuarial accrued liabilities are referred to as "liabilities.") Actuaries calculate liabilities based on an actuarial cost method and a number of assumptions, including discount rates and worker and retiree mortality. Actuaries also estimate the "actuarial value of assets" that fund a plan. (In this report, the actuarial value of assets is referred to simply as "assets.") The excess of actuarial accrued liabilities over the actuarial value of assets is referred to as the "unfunded actuarial accrued liability" or "unfunded liability." Under accounting standards, such information is disclosed in financial statements. In contrast, the liability that is recognized on the balance sheet is the cumulative excess of annual benefit costs over contributions to the plan. Certain amounts included in the actuarial accrued liability are not yet recognized as annual benefit costs under accounting standards, as they are amortized over several years. State and local government pension plans are not covered by most of the substantive requirements of the Employee Retirement Income Security Act of 1974 (ERISA), or by the insurance program operated by the Pension Benefit Guaranty Corporation (PBGC), which apply to most private employer benefit plans.
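To make the contrast concrete, the following is a minimal sketch, in Python, of the two benefit calculations for a hypothetical 30-year employee. The 2-percent-per-year multiplier, the salaries, the salary growth, and the investment return are illustrative assumptions and do not come from any plan discussed in this report; only the 8.5 percent employer and 5 percent employee contribution rates reflect the median figures cited above.

# Stylized comparison of a defined benefit annuity and a defined contribution
# balance for a hypothetical 30-year career. Plan terms are assumptions for
# illustration, not the provisions of any actual state or local plan.

years_of_service = 30
final_average_salary = 60_000      # assumed
multiplier = 0.02                  # assumed: 2% of final average salary per year

# Defined benefit: formula-based lifetime annuity.
db_annual_benefit = multiplier * years_of_service * final_average_salary
print(f"Defined benefit annuity: ${db_annual_benefit:,.0f} per year")  # $36,000

# Defined contribution: account balance depends on contributions and returns.
employer_rate, employee_rate = 0.085, 0.05   # median rates cited in the text
annual_return = 0.06                         # assumed
salary, balance = 35_000, 0.0                # assumed starting salary
for year in range(years_of_service):
    balance = balance * (1 + annual_return) + (employer_rate + employee_rate) * salary
    salary *= 1.03                           # assumed 3% annual salary growth
print(f"Defined contribution balance at retirement: ${balance:,.0f}")

The point of the sketch is only that the defined benefit amount falls directly out of a formula, while the defined contribution outcome depends on contributions and investment returns over the career.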
Federal law generally does not require state and local governments to prefund or report on the funded status of pension plans. However, in order to receive preferential tax treatment, state and local pensions must comply with requirements of the Internal Revenue Code. In addition, the retirement income security of Americans is an ongoing concern of the federal government. Although ERISA imposes participation, vesting, and other requirements directly upon employee pension plans offered by private sector employers, governmental plans such as those provided by state and local governments to their employees are excepted from these requirements. In addition, ERISA established an insurance program for defined benefit plans under which promised benefits are paid (up to a statutorily set amount) if an employer cannot pay them—but this too does not apply to governmental plans. However, for participants in governmental pension plans to receive preferential tax treatment (that is, for plan contributions and investment earnings to be tax-deferred), plans must be deemed "qualified" by the Internal Revenue Service. Since the 1980s, the Governmental Accounting Standards Board (GASB) has maintained standards for accounting and financial reporting for state and local governments. GASB operates independently and has no authority to enforce the use of its standards. Still, many state laws require local governments to follow GASB standards, and bond raters do consider whether GASB standards are followed. Also, to receive a "clean" audit opinion under generally accepted accounting principles, state and local governments are required to follow GASB standards. These standards require disclosing financial information on pensions, such as the amount of contributions and the ratio of assets to liabilities. Three measures are key to understanding pension plans' funded status: contributions, funded ratios, and unfunded liabilities. According to experts we interviewed, any single measure at a point in time may give a dimension of a plan's funded status, but it does not give a complete picture. Instead, the measures should be reviewed collectively over time to understand how the funded status is improving or worsening. For example, a strong funded status means that, over time, the amount of assets, along with future scheduled contributions, comes close to matching a plan's liabilities. Under GASB reporting standards, the funded status of different pension plans cannot be compared easily because governments use different actuarial approaches, such as different actuarial cost methods, assumptions, amortization periods, and "smoothing" mechanisms. Most public pension plans use one of three "actuarial cost methods," out of the six that GASB approves. Actuarial cost methods differ in several ways. First, each uses a different approach to calculate the "normal cost," the portion of future benefits that the cost method allocates to a specific year, resulting in different funding patterns for each. In addition to the cost methods, differences in assumptions used to calculate the funded status can result in significant differences among plans that make comparison difficult. Also, differences in amortization periods make it difficult to compare the funded status of different plans. Finally, actuaries for many plans calculate the value of current assets based on an average value of past years. As a result, if the value of assets fluctuates significantly from year to year, the "smoothed" value of assets changes less dramatically.
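A small worked example may help tie these measures together. The figures below are hypothetical, and the simple five-year average used for the actuarial value of assets is only a stand-in for the variety of smoothing techniques that plans actually use.

# Sketch of the funded-status measures discussed above, with hypothetical data.

liabilities = 10_000_000_000          # actuarial accrued liabilities (assumed)

# Market value of assets over the last five years (assumed); a simple average
# stands in here for "smoothing," though real plans use various methods.
market_values = [9.6e9, 8.1e9, 8.4e9, 8.9e9, 9.3e9]
smoothed_assets = sum(market_values) / len(market_values)

funded_ratio = smoothed_assets / liabilities          # assets divided by liabilities
unfunded_liability = liabilities - smoothed_assets    # excess of liabilities over assets

print(f"Actuarial value of assets (smoothed): ${smoothed_assets/1e9:.2f} billion")
print(f"Funded ratio: {funded_ratio:.0%}")
print(f"Unfunded liability: ${unfunded_liability/1e9:.2f} billion")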
Comparing the funded status of plans that use different smoothing periods can be confusing because the value of the different plans’ assets reflects a different number of years. We reported recently that state and local governments will likely face daunting fiscal challenges, driven in large part by the growth in health- related costs, such as Medicaid and health insurance for state and local employees. Our report was based on simulations for the state and local government sector that indicated that in the absence of policy changes, large and growing fiscal challenges will likely emerge within a decade. We found that, as is true for the federal sector, the growth in health-related costs is a primary driver of these fiscal challenges. State and local governments typically provide their employees with retirement benefits that include a defined benefit plan and a supplemental defined contribution plan for voluntary savings. However, the way each of these components is structured and the level of benefits provided varies widely--both across states, and within states based on such things as date of hire, employee occupation, and local jurisdiction. Statutes and local ordinances protect and manage pension plans and are often anchored by provisions in state constitutions and local charters. State and local law also typically requires that pensions be managed as trust funds and overseen by boards. Most state and local government workers are provided traditional pension plans with defined benefits. About 90 percent of full-time state and local employees participated in defined benefit plans as of 1998. In fiscal year 2006, state and local government pension systems covered 18.4 million members and made periodic payments to 7.3 million beneficiaries, paying out $151.7 billion in benefits. State and local government employees are generally required to contribute a percentage of their salaries to their defined benefit plans, unlike private sector employees, who generally make no contribution when they participate in defined benefit plans. According to a 50-state survey conducted by Workplace Economics, Inc., 43 of 48 states with defined benefit plans reported that general state employees were required to make contributions ranging from 1.25 to 10.5 percent of their salaries. Nevertheless, these contributions have no influence on the amount of benefits paid because benefits are based solely on the formula. In 1998, all states had defined benefit plans as their primary pension plans for their general state workers except for Michigan and Nebraska (and the District of Columbia), which had defined contribution plans as their primary plans, and Indiana, which combined both defined benefit and defined contribution components in its primary plan. Almost a decade later, we found that as of 2007, only one additional state (Alaska) had adopted a defined contribution plan as its primary plan; one additional state (Oregon) had adopted a combined plan, and Nebraska had replaced its defined contribution plan with a cash balance defined benefit plan. (See fig. 1.) Although still providing defined benefit plans as their primary plans for general state employees, some states also offer defined contribution plans (or hybrid defined benefit/defined contribution plans) as optional alternatives to their primary plans. These states include Colorado, Florida, Montana, Ohio, South Carolina, and Washington. 
In states that have adopted defined contribution plans as their primary plans, most employees continue to participate in defined benefit plans because employees are allowed to continue their participation in their previous plans (which is rare in the private sector). Thus, in contrast to the private sector, which has moved increasingly away from defined benefit plans over the past several decades, the overwhelming majority of states continue to provide defined benefit plans for their general state employees. Most states have multiple pension plans providing benefits to different groups of state and local government workers based on occupation (such as police officer or teacher) and/or local jurisdiction. According to the most recent Census data available, in fiscal year 2004-2005 there were a total of 2,656 state and local government pension plans. We found that defined benefit plans were still prevalent for most of these other state and local employees as well. For example, a nationwide study conducted by the National Education Association in 2006 found that of 99 large pension plans serving teachers and other school employees, 79 were defined benefit plans, 3 were defined contribution plans, and the remainder offered a range of alternative, optional, or combined plan designs with both defined benefit and defined contribution features. In addition to primary pension plans (whether defined benefit or defined contribution), data we gathered from various national organizations show that each of the 50 states has also established a defined contribution plan as a supplementary, voluntary option for tax-deferred retirement savings for their general state employees. Such plans appear to be common among other employee groups as well. These supplementary defined contribution plans are typically voluntary deferred compensation plans under section 457(b) of the federal tax code. While these defined contribution plans are fairly universally available, state and local worker participation in the plans has been modest. In a 2006 nationwide survey conducted by the National Association of Government Defined Contribution Administrators, the average participation rate for all defined contribution plans was 21.6 percent. One reason cited for low participation rates in these supplementary plans is that, unlike in the private sector, it has been relatively rare for employers to match workers' contributions to these plans, but the number of states offering a match has been increasing. According to a state employee benefit survey of all 50 states conducted by Workplace Economics, Inc., in 2006, 12 states matched the employee's contribution up to a specified percent or dollar amount. Among our site visit states, none made contributions to the supplementary savings plans for their general state employees, and employee participation rates generally ranged from 20 to 50 percent. In San Francisco, however, despite the lack of an employer match, 75 percent of employees had established 457(b) accounts. The executive director of the city's retirement system attributed this success to several factors, including (1) that the plan had been in place for over 25 years, (2) that the plan offers good investment options for employees to choose from, and (3) that plan administrators have a strong outreach program.
In the private sector, a growing number of employers are attempting to increase participation rates and retirement savings in defined contribution plans by automatically enrolling workers and offering new types of investment funds. State and local laws generally provide the most direct source of any specific legal protections for the pensions of state and local workers. Provisions in state constitutions often protect pensions from being eliminated or diminished. In addition, constitutional provisions often specify how pension funds are to be managed, such as by mandating certain funding requirements and/or requiring that the funds be overseen by boards of trustees. Moreover, we found that at the sites we visited, locally administered plans were generally governed by local laws. However, state employees, as well as the vast majority of local employees, are covered by state-administered plans. Protections for pensions in state constitutions are the strongest form of legal protection states can provide because constitutions—which set out the system of fundamental laws for the governance of each state— preempt state statutes and are difficult to change. Furthermore, changing a state constitution usually requires broad public support. For example, often a supermajority (such as three-fifths) of a state’s legislature may need to first approve proposed constitutional changes and typically if a change passes the legislature, voters must also approve it. The majority of states have some form of constitutional protection for their pensions. According to AARP data compiled in 2000, 31 states have a total of 93 constitutional provisions explicitly protecting pensions. (The other 19 states all have pension protections in their statutes or recognize legal protections under common law.) These constitutional pension provisions prescribe some combination of how pension trusts are to be funded, protected, managed, or governed. (See table 1.) In nine states, constitutional provisions take the form of a specific guarantee of the right to a benefit. In two of the states we visited, the state constitution provided protection for pension benefits. In California, for example, the state constitution provides that public plan assets are trust funds to be used only for providing pension benefits to plan participants. In Michigan, the state constitution provides that public pension benefits are contractual obligations that cannot be diminished or impaired and must be funded annually. The basic features of pension plans—such as eligibility, contributions, and types of benefits—are often spelled out in state or local statute. State- administered plans are generally governed by state laws. For example, in California, the formulas used to calculate pension benefit levels for employees participating in the California Public Employees’ Retirement System (CalPERS) are provided in state law. Similarly, in Oregon, pension benefit formulas for state and local employees participating in the Oregon Public Employees Retirement System (OPERS) plans are provided in state statute. In addition, we found that at the sites we visited locally administered plans were generally governed by local laws. For example, in San Francisco, contribution rates for employees participating in the San Francisco City and County Employees’ Retirement System are spelled out in the city charter. 
Legal protections usually apply to benefits for existing workers or benefits that have already accrued; thus, state and local governments generally can change the benefits for new hires by creating a series of new tiers or plans that apply to employees hired only after the date of the change. For example, the Oregon legislature changed the pension benefit for employees hired on or after January 1, 1996, and again for employees hired on or after August 29, 2003, each time increasing the retirement age for the new group of employees. For some state and local workers whose benefit provisions are not laid out in detail in state or local statutes, specific provisions are left to be negotiated between employers and unions. For example, in California, according to state officials, various benefit formula options for local employees are laid out in state statutes, but the specific provisions adopted are generally determined through collective bargaining between the more than 1,500 different local public employers and rank-and-file bargaining units. In all three states we visited, unions also lobby the state legislature on behalf of their members. For example, in Michigan, according to officials from the Department of Management and Budget, unions marshal support for or against a proposal by taking such actions as initiating letter-writing campaigns to support or oppose legislative measures. In accordance with state constitution and/or statute, the assets of state and local government pension plans are typically managed as trusts and overseen by boards of trustees to ensure that the assets are used for the sole purpose of meeting retirement system obligations and that the plans are in compliance with the federal tax code. Boards of trustees, of varying size and composition, often serve the purpose of establishing the overall policies for the operation and management of the pension plans, which can include adopting actuarial assumptions, establishing procedures for financial control and reporting, and setting investment strategy. On the basis of our analysis of data from the National Education Association, the National Association of State Retirement Administrators (NASRA), and reports and publications from selected states, we found that 46 states had boards overseeing the administration of their pension plans for general state employees. These boards ranged in size from 5 to 19 members, with various combinations of those elected by plan members, those appointed by a state official, and those who serve automatically based on their office in state government (known as ex officio members). (See fig. 2.) Different types of members bring different perspectives to bear, and can help to balance competing demands on retirement system resources. For example, board members who are elected by active and retired members of the retirement system, or who are union members, generally help to ensure that members’ benefits are protected. Board members who are appointed sometimes are required to have some type of technical knowledge, such as investment expertise. Finally, ex officio board members generally represent the financial concerns of the state government. Some pension boards do not have each of these perspectives represented. For example, boards governing the primary public employee pension plans in all three states we visited had various compositions and responsibilities. (See table 2.) 
At the local level, in Detroit, Michigan, a majority of the board of Detroit's General Retirement System is composed of members of the system. According to officials from the General Retirement System, this is thought to protect pension plan assets from being used for purposes other than providing benefits to members of the retirement system. Regarding responsibilities, the board administers the General Retirement System and, as specified in local city ordinances, is responsible for the system's proper operation and investment strategy. Pension boards of trustees typically serve as pension plan fiduciaries, and as fiduciaries, they usually have significant independence in terms of how they manage the funds. Boards make policy decisions within the framework of the plan's enabling statutes, which may include adopting actuarial assumptions, establishing procedures for financial control and reporting, and setting investment policy. In the course of managing pension trusts, boards generally obtain the services of independent advisors, actuaries, or investment professionals. Also, some states' pension plans have investment boards in addition to, or instead of, general oversight boards. For example, three of the four states without general oversight boards have investment boards responsible for setting investment policy. While public employees may have a broad mandate to serve all citizens, board members generally have a fiduciary duty to act solely in the interests of plan participants and beneficiaries. One study of approximately 250 pension plans at the state and local level found that plans with boards overseeing them were associated with greater funding than those without boards. When state pension plans do not have a general oversight board, these responsibilities tend to be handled directly by legislators and/or senior executive officials. For example, in the state of Washington, the pension plan for general state employees is overseen by the Pension Funding Council—a six-member body whose membership, by statute, includes four state legislators. The council adopts changes to economic assumptions and contribution rates for state retirement systems by majority vote. In Florida, the Florida Retirement System is not overseen by a separate independent board; instead, the pension plan is the responsibility of the State Board of Administration, composed of the governor, the chief financial officer of the state, and the state attorney general. In New York, the state comptroller, an elected official, serves as sole trustee and administrative head of the New York State and Local Employees' Retirement System. Currently, most state and local government pension plans have enough invested resources set aside to pay the benefits they are scheduled to provide over the next several decades. Many experts consider a funded ratio of about 80 percent or better to be sound for state and local government pensions. While most plans' funding may be sound, a few plans have persistently reported low funded ratios, which will eventually require the government employer to improve funding, for example, by reducing benefits or by increasing contributions. Even for many plans with lower funded ratios, benefits are generally not at risk in the near term because current assets and new contributions may be sufficient to pay benefits for several years. Still, many governments have often contributed less than the amount needed to improve or maintain funded ratios.
Low contributions raise concerns about the future funded status, and may shift costs to future generations. Most public pension plans report having sufficient assets to pay for retiree benefits over the next several decades. Many experts and officials to whom we spoke consider a funded ratio of 80 percent to be sufficient for public plans for a couple of reasons. First, it is unlikely that public entities will go out of business or cease operations as can happen with private sector employers, and state and local governments can spread the costs of unfunded liabilities over a period of up to 30 years under current GASB standards. In addition, several commented that it can be politically unwise for a plan to be overfunded; that is, to have a funded ratio over 100 percent. The contributions made to funds with “excess” assets can become a target for lawmakers with other priorities or for those wishing to increase retiree benefits. More than half of state and local governments’ plans reviewed by the Public Fund Survey (PFS) had a funded ratio of 80 percent or better in fiscal year 2006, but the percentage of plans with a funded ratio of 80 percent or better has decreased since 2000, as shown in figure 3. Our analysis of the PFS data on 65 self-reported state and local government pension plans showed that 38 (58 percent) had a funded ratio of 80 percent or more, while 27 (42 percent) had a funded ratio of less than 80 percent. In the early 2000s, according to one study, the funded ratio of 114 state and local government pension plans together reached about 100 percent; it has since declined. In fiscal year 2006, the aggregate funded ratio was about 86 percent. Some officials attribute the decline in funded ratios since the late 1990s to the decline of the stock market, which reduced the value of assets. This sharp decline would likely affect funded ratios for several years because most plans use smoothing techniques to average out the value of assets over several years. Our analysis of several factors affecting the funded ratio showed that changes in investment returns had the most significant impact on the funded ratio between 1988 and 2005, followed by changes in liabilities. Although most plans report being soundly funded in 2006, a few have been persistently underfunded, and some plans have seen funded ratio declines in recent years. We found that several plans in our data set had funded ratios below 80 percent in each of the years for which data is available. Of 70 plans in our data set, 6 had funded ratios below 80 percent for 9 years between 1994 and 2006. Two plans had funded ratios below 50 percent for the same time period. In addition, of the 27 plans that had funded ratios below 80 percent in 2006, 15 had lower funded ratios in 2006 than in 1994. The sponsors of these plans may be at risk in the future of increased budget pressures. By themselves, lower funded ratios and unfunded liabilities do not necessarily indicate that benefits for current plan members are at risk, according to experts we interviewed. Unfunded liabilities are generally not paid off in a single year, so it can be misleading to review total unfunded liabilities without knowing the length of the period over which the government plans to pay them off. Large unfunded liabilities may represent a fiscal challenge, particularly if the period to pay them off is short. But all unfunded liabilities shift the responsibility for paying for benefits accrued in past years to the future. 
Unfunded liabilities will eventually require the government employer to increase revenue, reduce benefits or other government spending, or do some combination of these. Revenue increases could come from higher taxes, returns on investments, or employee contributions. Nevertheless, we found that unfunded liabilities do not necessarily imply that pension benefits are at risk in the near term. Current funds and new contributions may be sufficient to pay benefits for several years, even when funded ratios are relatively low. A number of governments reported not contributing enough to keep up with yearly costs. Governments need to contribute the full annual required contribution (ARC) yearly to maintain the funded ratio of a fully funded plan or improve the funded ratio of a plan with unfunded liabilities. In fiscal year 2006, the sponsors of 46 percent of the 70 plans in our data set contributed less than 100 percent of the ARC, as shown in figure 4, including 39 percent that contributed less than 90 percent of the ARC. In fact, the percentage of governments contributing less than the full ARC has risen in recent years. This continues a recent trend in which only about half of governments have made their full contributions. In particular, some of the governments that did not contribute the full ARC in multiple years were sponsors of plans with lower funded ratios. Almost two-thirds of plans with funded ratios below 80 percent in 2006 did not contribute the full ARC in multiple years. Of the 32 plans that in 2006 had funded ratios below 80 percent, 20 did not contribute the full ARC in more than half of the 9 years for which data is available. In addition, 17 of these governments did not contribute more than 90 percent of the full ARC in more than half the years. State and local government pension representatives told us that governments may not contribute the full ARC each year for a number of reasons. First, when state and local governments are under fiscal pressure, they may have to make difficult choices about paying for competing interests. State and local governments will likely face increasing fiscal challenges in the next several years as the cost of health care continues to rise. In light of this stress, the ability of some governments to continue to pay the ARC may be questioned. Second, changes in the value of assets can affect governments' expectations about how much they will have to contribute. Moreover, some plans have contribution rates that are fixed by constitution, statute, or practice and do not change in response to changes in the ARC. Even when the contribution rate is not fixed, the political process may take time to recognize and act on the need for increased contributions. Nonetheless, many states have been increasing their contribution rates in recent years, according to information compiled by the National Conference of State Legislatures. Third, some governments may not contribute the full ARC because they are not committed to prefunding their pension plans and instead have other priorities. When a government contributes less than the full ARC, the funded ratio can decline and unfunded liabilities can rise, if all other assumptions about the change in assets and liabilities are met. Increased unfunded liabilities will require larger contributions in the future to keep pace with the liabilities that accrue each year and to make up for liabilities that accrued in the past. As a result, costs are shifted from current to future generations.
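The direction of this effect can be illustrated with a highly stylized projection. In the sketch below, the starting assets and liabilities, normal cost, benefit payments, 8 percent return, and 30-year open amortization are all assumptions chosen for illustration; the model ignores payroll growth, demographic change, and investment volatility, so it is not a representation of any actual plan or of how ARCs are actually calculated.

# Stylized projection of a plan's funded ratio when the sponsor pays only a
# fraction of the annual required contribution (ARC). All inputs are assumed;
# amounts are in arbitrary units (say, billions of dollars).

def project_funded_ratio(assets, liabilities, normal_cost, benefits, rate=0.08,
                         amortization_years=30, pct_of_arc_paid=1.0, years=20):
    # Open, level-dollar amortization factor applied to the unfunded liability.
    amort_factor = rate / (1 - (1 + rate) ** -amortization_years)
    for _ in range(years):
        arc = normal_cost + max(liabilities - assets, 0) * amort_factor
        contribution = pct_of_arc_paid * arc
        assets = assets * (1 + rate) + contribution - benefits
        liabilities = liabilities * (1 + rate) + normal_cost - benefits
    return assets / liabilities

full = project_funded_ratio(assets=80, liabilities=100, normal_cost=3, benefits=7)
half = project_funded_ratio(assets=80, liabilities=100, normal_cost=3, benefits=7,
                            pct_of_arc_paid=0.5)
print(f"Funded ratio after 20 years, paying the full ARC:   {full:.0%}")
print(f"Funded ratio after 20 years, paying half of the ARC: {half:.0%}")

In this stylized run, the funded ratio rises above 90 percent when the full ARC is paid each year but falls to roughly 70 percent when only half of the ARC is paid, which is the compounding effect described above.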
The funded status of state and local government pensions overall is reasonably sound, though recent deterioration underscores the importance of keeping up with contributions. Since the stock market downturn in the early 2000s, the funded ratios of some governments have declined. Although governments can gradually recover from these losses, the failure of some to consistently make the annual required contributions undermines that progress and is cause for concern. This is especially important as state and local governments face increasing fiscal pressure in the coming decades. The ability to maintain current levels of public sector retiree benefits will depend, in large part, on the nature and extent of the fiscal challenges these governments face in the years ahead. As state and local governments begin to comply with GASB accounting and reporting standards, information about the future costs of retiree health benefits will become more transparent. In light of the initial estimates of the cost of future retiree health benefits, state and local governments will likely have to find new strategies for dealing with their unfunded liabilities. Although public sector workers have thus far been relatively shielded from many of the changes that have occurred in private sector defined benefit commitments, these protections could undergo revision under the pressure of overall future fiscal commitments. We are continuing our work on state and local government retiree benefits. We have two engagements underway; the first study will examine the various approaches these governments are taking to address their retiree health care liabilities, while the second examines the ways state and local governments allocate the assets in their pension and retiree health care funds. We are pleased that this committee is interested in our work and look forward to working with you in the future. That concludes my testimony: I would be pleased to respond to any questions the committee has. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tamara Cross (Assistant Director), Bill Keller (Assistant Director), Anna Bonelli, Margie Shields, Joe Applebaum, and Craig Winslow. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Millions of state and local government employees are promised pension benefits when they retire. Although these benefits are not subject, for the most part, to federal laws governing private sector benefits, there is a federal interest in ensuring that all Americans have a secure retirement, as reflected in the special tax treatment provided for private and public pension funds. Recently, new accounting standards have called for the reporting of liabilities for future retiree health benefits. It is unclear what actions state and local governments may take once the extent of these liabilities becomes clear, but such anticipated fiscal and economic challenges have raised questions about the unfunded liabilities for state and local retiree benefits, including pension benefits. GAO was asked to report on (1) the current structure of state and local government pension plans and how pension benefits are protected and managed, and (2) the current funded status of state and local government pension plans. GAO spoke to a wide range of public experts and officials from various federal and nongovernmental entities, made several site visits and gathered detailed information about state benefits, and analyzed self-reported data on the funded status of state and local pension plans from the Public Fund Survey and Public Pension Coordinating Council. State and local entities typically provide pension plans with defined benefits and a supplemental defined contribution plan for voluntary savings. Most states still have traditional defined benefit plans as the primary retirement plans for their workers. However, a couple of states have adopted defined contribution and other plans as their primary plan. State and local entities typically offer tax-deferred supplemental voluntary plans to encourage workers to save. State statutes and local ordinances protect and manage pension benefits and often include explicit protections, such as provisions stating that pensions promised to public employees cannot be eliminated or diminished. In addition, state constitutions and/or statutes often require pension plans to be managed as trust funds and overseen by boards of trustees. Most state and local government pension plans have enough invested resources set aside to fund the benefits they are scheduled to pay over the next several decades. Many experts consider a funded ratio (actuarial value of assets divided by actuarial accrued liabilities) of about 80 percent or better to be sound for government pensions. We found that 58 percent of 65 large pension plans were funded to that level in 2006, a decrease since 2000, when about 90 percent of plans were so funded. Low funded ratios would eventually require the government employer to improve funding, for example, by reducing benefits or by increasing contributions. However, pension benefits are generally not at risk in the near term because current assets and new contributions may be sufficient to pay benefits for several years. Still, many governments have often contributed less than the amount needed to improve or maintain funded ratios. Low contributions raise concerns about the future funded status.
In July 1993, DOD changed a long-standing practice and permitted defense contractors to charge restructuring costs to transferred flexibly priced contracts, provided (1) the restructuring costs were allowable under the Federal Acquisition Regulation and (2) a DOD contracting officer determined the business combination would result in overall reduced costs to DOD or preserve a critical defense capability. Concerns over the payment of such costs led Congress to pass legislation requiring that certain conditions be met before DOD reimbursed defense contractors for restructuring-related expenses. The legislation required, in part, that a senior DOD official certify that projections of restructuring savings are based on audited cost data and that DOD's share of projected savings exceeds allowed costs, and that the Secretary of Defense report to Congress on DOD's experience with defense contractor business combinations, including whether savings associated with each restructuring actually exceed restructuring costs. The Secretary of Defense is currently required by 10 U.S.C. 2325 to determine in writing that the savings will be at least twice the amount of allowed costs or that projected savings will exceed costs allowed and that the combination will result in the preservation of a critical capability. DOD's process to comply with these provisions requires, in part, that (1) the contractor submit a restructuring proposal, including details on planned restructuring activities, their projected costs, and anticipated savings; (2) the Defense Contract Audit Agency (DCAA) audit the proposal; and (3) following the audit, a DOD contracting official recommend whether the proposal should be certified. Assuming a favorable recommendation, a senior DOD official issues a written certification stating that projected savings should exceed the projected costs. The certification enables the contractor to bill restructuring costs to DOD and, in turn, allows DOD to reimburse the contractor for DOD's share of such costs. Through December 31, 1997, DOD issued nine certifications for restructuring proposals associated with six business combinations. DOD officials indicated that another six restructuring proposals are in various stages of review within DOD, and several significant business combinations may result in future restructuring proposals. This latter category includes Raytheon's acquisitions of the defense units of Texas Instruments and Hughes Electronics and the merger of Boeing and McDonnell Douglas. For the seven business combinations we examined, certified restructuring costs totaled about $1.5 billion. At the time of our review, the businesses estimated they had spent about $1.2 billion (see table 1). Restructuring costs are allocated to all of a contractor's customers; consequently, DOD's portion of these costs depends on its share of the contractor's total business base. Based on estimates made at the time of certification, DOD projected it would pay about 56 percent of the restructuring costs. Restructuring after a business combination includes a wide range of activities, such as the disposal and modification of facilities, consolidation of operations and systems, relocation of workers and equipment, and workforce reductions. We grouped the estimated amount of restructuring costs incurred by the seven business combinations into broad categories (see table 2). Of the $1.2 billion in estimated restructuring costs, disposal and relocation of facilities and equipment was the largest cost category.
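To make the certification test described above concrete, the short Python sketch below applies the decision rule summarized from 10 U.S.C. 2325: DOD's share of projected savings must be at least twice its share of allowed costs, or must simply exceed that share of costs if the combination also preserves a critical defense capability. The function name and the dollar inputs are hypothetical illustrations, not DOD's actual certification calculations for any of the combinations discussed in this report.

```python
# Illustrative sketch of the certification test described in 10 U.S.C. 2325,
# as summarized above; the dollar inputs below are hypothetical.

def certification_supported(dod_share_savings: float,
                            dod_share_costs: float,
                            preserves_critical_capability: bool) -> bool:
    """Return True if projected savings satisfy the statutory test:
    savings at least twice allowed costs, or savings exceed costs
    and the combination preserves a critical defense capability."""
    if dod_share_savings >= 2 * dod_share_costs:
        return True
    return dod_share_savings > dod_share_costs and preserves_critical_capability

# Hypothetical proposal: DOD's share of projected costs and savings, in millions.
costs, savings = 120.0, 310.0
print(certification_supported(savings, costs, preserves_critical_capability=False))  # True: 310 >= 2 * 120
```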
The seven business combinations included in our review projected that about 21,000 workers or positions would be eliminated as a result of restructuring activities. The business combinations also reported that, at the time of our review, about 18,000 workers or positions had actually been eliminated (see table 3). While the job losses attributed to restructuring are significant, the losses reflect the overall downsizing in defense-related employment. DOD estimates that defense-related industry employment will decrease from about 2.7 million workers in 1993 to about 2.1 million workers by the end of 1998. The seven business combinations estimated they spent about $115.4 million—or about 10 percent—of the total restructuring costs for benefits and services associated with workforce reductions. The majority of these costs were for severance pay, with smaller amounts for temporary health benefits and outplacement services. Outplacement included such services as career transition workshops, resume development, career counseling services, job listings, and information on state and federal programs. The costs for worker benefits and services varied by business combination, ranging from 3 percent to 14 percent of the combination's total restructuring costs. A key determinant in whether laid-off workers received severance payments was whether the company provided such benefits prior to the business combination. For example, General Dynamics, Northrop, and the Vought corporations did not provide severance benefits to their workers prior to the combination; consequently, workers who were laid off as a result of restructuring received no severance benefits from their former employer. For those companies that provided severance pay, the amount varied, depending on such factors as whether the workers were salaried or hourly employees and the length of time they had been with the corporations. Laid-off workers may also be provided benefits and services that were not funded by DOD. For example, the state of California, through the San Diego Consortium and the Private Industry Council, awarded Martin Marietta $935,000 to assist General Dynamics' laid-off employees. Each of the combinations that sought payment for restructuring activities was required to demonstrate that DOD's share of the estimated savings from the restructuring would exceed DOD's share of the projected costs. Overall, DOD estimates that it should realize a net savings of about $3.3 billion from restructuring activities (see table 4). DOD's figures indicate that for each dollar of restructuring costs it expects to pay, it will receive about $4.81 in benefits. However, not all of the reported savings may be directly attributable to restructuring. DCAA's guidance on auditing restructuring proposals may not provide sufficient criteria to ensure that the proposed savings are directly due to restructuring. DCAA's guidance discusses at length factors to consider in evaluating proposed costs, but it provides far less guidance on evaluating savings. Relative to evaluating projected restructuring savings, the guidance notes that contractor restructuring efforts are intended to result in the combinations of facilities, operations, or workforce that eliminate redundant capabilities, improve future operations, and reduce overall costs. It further notes that it is the contractor's responsibility to establish and support the reasonableness of the baseline to measure restructuring savings, but that various techniques can be used to do so.
Finally, the guidance requires DCAA auditors to ensure that the estimates of future savings are reasonable and not due to other factors, such as changes in inflation or interest rates. This broad framework may result in DOD's accepting proposed savings that are not directly attributable to restructuring. For example, as part of our ongoing work at Lockheed Martin's Space and Strategic Missiles sector, we attempted to isolate the effects of restructuring from nonrestructuring-related activities. The overall savings from this sector are considerable, amounting to about 43 percent of the total amount of projected restructuring savings from the seven combinations in our review. Of the savings accepted by DOD for certification purposes, about $489 million was attributed to increased operational efficiencies at one location through the adoption of improved business practices. Contractor officials acknowledged that some of the improvements and associated savings could have been implemented without restructuring, noting that the contractor had various efforts to improve its operational efficiency underway or planned prior to restructuring. However, these officials believed that the business combination provided the means to overcome organizational and cultural barriers that might otherwise have hindered these efforts. A senior Lockheed Martin official emphasized that the merger provided the company a unique opportunity to evaluate and implement the best practices from four Lockheed and Martin Marietta facilities. DCAA officials told us that during their audit of the restructuring proposal for certification, they did not consider whether such savings could have been accomplished in the absence of restructuring. They noted, moreover, that they did not believe that DCAA's guidance provides sufficient criteria to distinguish savings attributable to restructuring from those savings that would have occurred regardless of the restructuring. The Department of Justice and the Federal Trade Commission face a similar issue during their reviews of proposed mergers and acquisitions. In April 1997, the agencies issued revised guidance that discusses the types of efficiencies they consider germane to their reviews. In general, while the agencies will consider savings as part of their analysis, these agencies consider only those savings that are specific to the merger and that are unlikely to be accomplished in the absence of the merger. For example, the guidance notes that efficiencies resulting from shifting production among facilities formerly owned by the separate firms are more likely to be related to the merger. On the other hand, the guidance notes that other efficiencies, such as those relating to management improvements, are less likely to be specifically related to the merger. In our view, while evaluating proposed savings requires flexibility and the use of professional judgment, reflecting a similar approach in DCAA's guidance would provide a better depiction of the impact of restructuring activities. While DOD reports to Congress its estimates of whether savings associated with each business combination actually exceed restructuring costs, it acknowledges that making accurate estimates is inherently difficult. DOD reported that, as of August 1997, it had reimbursed defense contractors approximately $294.3 million in restructuring-related expenses, while it estimated that savings of about $2.2 billion had been realized (see table 5).
As a result, DOD estimated that it has realized a net benefit of about $1.9 billion, or more than half the $3.3 billion in net savings certified for the seven business combinations. The savings reported by DOD were generally not developed from a detailed analysis of the effect of restructuring on individual contract prices, but rather were calculated using the same or similar methodologies employed during the certification process. Caution should be exercised when interpreting the reported savings. DOD has consistently said that it is inherently difficult to precisely identify the amount of actual savings realized through restructuring activities several years after the initial estimate. For example, DOD has stated it is not feasible to completely isolate the effects of restructuring from such other factors as fluctuations in a contractor's business base, changes in the inflation rate, accounting system changes, subsequent reorganizations, and unexpected events, which also impact a contractor's cost of operations. Recognizing such difficulties, DOD initially agreed that it would not require validation of the projected savings for two business combinations, noting in one agreement that such a validation was not practical because of business dynamics and future uncertainties. However, in response to the reporting requirement, DOD has estimated the actual savings resulting from these two business combinations. The difficulty of isolating the impact of nonrestructuring activities on estimated savings is illustrated by restructuring activities following Martin Marietta's acquisition of General Dynamics Space Systems Division. In this case, DCAA used a different business base in estimating actual restructuring savings than was used to estimate the certified savings. The use of a different business base led, in part, to DOD's share of net savings shown in table 5—$137.3 million—being considerably higher than the $88.9 million of net savings expected at certification. DCAA officials told us that they were unaware of any way to isolate changes in Martin Marietta's current business base to make it comparable to that used during the certification process. The impact of restructuring savings on DOD's budget requirements has been limited. Projected savings constituted a small percentage of DOD's budgets and were generally not considered by DOD officials in formulating budget requests. Also, even when restructuring activities influenced a weapon system's cost, the impact was often offset by nonrestructuring-related events or used to fund other program-related needs. DOD's estimate of restructuring savings—which includes those savings that may not be directly related to restructuring—represents a cumulative amount of savings, often spread over a 5-year period, for each business combination. Overall, DOD estimated it would realize a net savings of about $3.3 billion between 1993 and 2000. In comparison, DOD's approved or projected budgets for research and procurement totaled more than $658 billion over that same period. Consequently, DOD's share of certified savings constitutes less than 1 percent of DOD's budgets. A senior DOD budget official stated that DOD generally has not considered restructuring savings when formulating its budget requests and relied on the individual program offices to do so. He acknowledged, however, that DOD's budget guidance does not specifically require the program offices to consider restructuring savings.
This official also told us that the one exception that he was aware of involved Raytheon’s recent acquisition of Hughes’ defense business, for which DOD considered reducing the projected budgets for the advanced medium range air-to-air missile (AMRAAM), a joint Air Force/Navy program, and the Navy’s standard missile program to reflect anticipated savings. Regarding the AMRAAM, Air Force officials argued that any savings that resulted from the business combination were needed to fund future programmatic needs. According to Navy officials, the savings were used to budget for additional missiles for both programs. DOD subsequently agreed not to reduce the proposed budgets to reflect restructuring savings. Our work provides two other examples of how restructuring activities influenced the costs of major weapon systems without directly affecting budgetary requirements. For example, following the Northrop Grumman business combination, several restructuring activities, including closing Grumman’s former headquarters in Bethpage, New York, were undertaken. Consequently, the amount of corporate overhead costs allocated to Northrop Grumman’s business units, including the B-2 program, was less than projected before the business combination. B-2 program officials told us that no adjustments were made to the B-2 program’s estimated costs or future budget requests due to restructuring. According to these officials, projected savings from the combination may have been reflected in new overhead rates, which would then be used in preparing new contract proposals or finalizing overhead rates on existing flexibly priced contracts. In fact, the B-2’s general and administrative overhead rate—to which corporate overhead costs are allocated—actually rose significantly from 1993 to 1996, due principally to the decrease in the planned procurement of B-2s. Consequently, while the lower corporate overhead costs resulted in the B-2’s general and administrative overhead rate being slightly lower than it would have been without the restructuring, the changes in planned procurement more than offset the impact of restructuring. Similarly, the Air Force’s Titan IV launch vehicle program was affected by the Martin Marietta-General Dynamics and Lockheed-Martin Marietta business combinations. According to Lockheed Martin, restructuring activities resulted in a benefit of over $600 million to the Titan IV program. Titan IV program officials agreed that restructuring activities reduced projected program costs, but indicated that it was not possible to precisely quantify the impact of restructuring. These officials explained that a number of changes were occurring concurrently on the Titan program, including a reduction in the number of launch vehicles and the implementation of various acquisition reform initiatives. Nevertheless, program officials told us that restructuring activities contributed to their ability to absorb congressional, DOD, or Air Force budget cuts or to fund other program-related needs. Our work indicates that DCAA’s guidance does not provide sufficient criteria to evaluate restructuring savings, particularly savings that may have been achievable without restructuring. Estimates based on this guidance may not accurately depict the savings associated with restructuring. Consequently, we recommend that the Secretary of Defense direct the Director, DCAA, to clarify DCAA’s guidance on evaluating restructuring savings. 
In particular, the guidance should discuss how to evaluate proposed savings based on activities that were ongoing or planned prior to restructuring or that could have been achieved absent restructuring, such as those achievable by management improvements. DOD commented on a draft of the proprietary report. DOD disagreed with our finding that some of the savings it reports may not be directly attributable to restructuring. DOD also disagreed with our recommendation that DCAA’s guidance needs to be clarified. DOD believed DCAA’s current guidance properly implements the legislative requirements. DOD indicated that in reviewing restructuring proposals, it is most concerned with ensuring both that savings exceed costs by the required ratio and that restructuring costs and savings are factored into contract pricing mechanisms as quickly as possible. DOD further noted that when a contractor can demonstrate that savings will significantly exceed costs, there is usually no reason to argue over whether the savings could have been accomplished without restructuring. We agree with DOD that it has established a process to comply with the legislative intent that DOD’s share of projected savings exceeds its projected share of costs, and strongly agree that DOD should ensure that the impact of restructuring is factored into contract pricing mechanisms as quickly as possible. We also believe DCAA’s guidance provides an overall framework to evaluate savings. For example, the guidance states contractor restructuring efforts are intended to result in the combinations of facilities, operations, or workforce that eliminate redundant capabilities, improve future operations, and reduce overall costs. The guidance further states that auditors should ensure that future savings are reasonable and not due to other factors, such as changes in inflation or interest rates. Nevertheless, our work indicates that this broad framework may result in DOD accepting savings that may not be directly attributable to restructuring. At one location at which a considerable amount of savings were proposed due to the adoption of improved business practices, contractor officials acknowledged that some of the improvements and associated savings could have been implemented without restructuring, noting that the contractor had various efforts to improve its operational efficiency underway or planned prior to restructuring. DCAA officials indicated that DCAA’s guidance does not provide sufficient criteria to allow them to question such savings. Ensuring that such savings are related to restructuring would seem a basic element necessary to satisfy the legislative criteria and DCAA’s own guidance. Further, DOD reports annually to Congress on the net savings expected from combinations certified during the preceding year, as well as estimates of savings actually realized. While making such estimates is inherently difficult, the reports should, in our view, attempt to accurately depict the impact of restructuring to the extent possible. Finally, several business combinations have recently announced their intent to restructure, including Raytheon and Boeing. Our discussions with DCAA, Defense Contract Management Command (DCMC), and contractor officials indicated that better guidance as to what constitutes restructuring-related savings would assist in these efforts. Consequently, we believe augmenting the existing criteria with a discussion of the various factors that auditors should consider in evaluating savings is a reasonable request. 
DOD’s comments are reprinted in appendix I. To determine the amount and nature of restructuring costs, we requested the cognizant DCMC office to provide updated restructuring-related cost and savings information for each of the business combinations in our review. We analyzed this information to determine the amount of restructuring costs incurred for workforce reductions and to identify the costs associated with services provided to assist laid-off workers find reemployment. We did not, however, independently verify the information provided. In assessing restructuring savings relative to the restructuring costs paid by DOD, we relied on the information contained in DOD’s November 22, 1997, report to Congress. We did examine, however, the methodology DCAA used to estimate the amount of restructuring costs paid by DOD and the amount of estimated savings at selected units of business combinations at which we conducted work. To determine the budgetary implications of restructuring savings, we compared DOD’s share of certified restructuring savings to DOD’s actual or projected budgets for the period over which the savings were expected to be realized. We discussed how DOD uses projected restructuring savings in formulating its budget requests with officials from the Office of the Under Secretary of Defense (Comptroller/Chief Financial Officer). We also discussed how projected restructuring savings were used by the Air Force’s Titan IV and B-2 program offices in formulating their budget requests. Finally, we discussed various aspects of the restructuring costs and savings with officials from the business combinations, DOD, DCMC, and DCAA. We performed our review between December 1997 and March 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Commander, DCMC; and the Director, DCAA. We will also provide copies to other committees and Members of Congress upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II. 1. Questions regarding the treatment of proposed restructuring savings have arisen during other certifications. We illustrated it at one business segment because of the large amount of savings there and the concerns expressed by the Defense Contract Audit Agency (DCAA), Defense Contract Management Command (DCMC), and contractor officials regarding the need for additional guidance. However, we did not intend to imply that certification of the overall business combination—which had to demonstrate only that DOD’s share of projected savings exceeded its projected share of costs—was improper. It should be noted that the issue as to what constitutes restructuring- related savings is not limited to the certification process, but also plays a role in DOD’s report to Congress on realized savings. For example, based in part on our work at this business segment, DCAA officials rejected $66 million in savings the contractor claimed on one program. While DCAA rejected more than $124 million overall at this location, DCAA officials told us the absence of clear criteria precluded them from questioning additional amounts of the claimed savings. 2. 
We would agree that any costs associated with activities that are not directly related to restructuring should not be subject to the certification process, but rather should be reviewed under normal auditing practices. As with savings, eliminating costs that are not restructuring-related would provide a more accurate depiction of restructuring activities. 3. We did not intend to suggest that DOD should adopt the joint guidance issued by the Department of Justice and the Federal Trade Commission per se, but rather we used the joint guidance to illustrate an approach that DCAA should consider in revising its guidance. We have revised the text accordingly. The major contributors to this report are Dorian R. Dunbar, Kenneth H. Roberts, Thaddeus S. Rytel, Jr., Ruth-Ann Hijazi, and Donald Y. Yamada.
Pursuant to a legislative requirement, GAO provided information on restructuring costs of defense contractors involved in business combinations since 1993, focusing on the: (1) specific costs associated with workforce reductions; (2) services provided to workers affected by business combinations; (3) savings realized from the business combinations relative to the restructuring costs paid by the Department of Defense (DOD); and (4) budgetary implications of reported restructuring savings. GAO noted that: (1) the seven business combinations estimated they had spent $1.2 billion at the time of GAO's review on such restructuring activities as the disposal and relocation of facilities and equipment, consolidation of operations and systems, relocation of employees, and workforce reductions; (2) severance pay constituted the majority of the costs associated with workforce reductions, with smaller amounts provided for temporary health benefits and outplacement services; (3) outplacement services included career transition workshops, resume development, career counseling services, job listings, and information on state and federal programs; (4) overall, the business combinations reported that about 18,000 workers or positions were eliminated due to restructuring activities; (5) DOD estimated it would realize a net benefit of about $3.3 billion from certified restructuring activities; (6) further, DOD estimated that as of August 1997 it had realized a net savings of about $1.9 billion, or more than half of the certified amount; (7) however, DOD's figures may overstate the amount that is directly attributable to restructuring; (8) the lack of specific DOD guidance on evaluating savings may contribute to this condition; (9) caution should be exercised when using or interpreting estimates of restructuring savings; (10) in a budgetary context, the $3.3 billion of estimated restructuring savings represents a cumulative amount of savings for each business combination, often spread over a 5-year period; (11) such savings constituted less than 1 percent of DOD's research and procurement budgets over the period for which the savings were projected; (12) with one exception, DOD officials told GAO they did not consider restructuring savings when formulating DOD's budget requests; (13) the one case cited by DOD involved two Air Force and Navy missile programs; (14) while DOD had initially proposed reducing the programs' budgets to reflect anticipated restructuring savings, DOD subsequently agreed with the military services that the projected savings were needed to fund other program-related needs; and (15) in cases in which restructuring activities influenced a particular weapon system's cost, projected savings were often offset by nonrestructuring-related events.
The Constitution vests Congress with the authority to conduct the decennial census in such manner as it determines, and Congress in turn has granted the Secretary of Commerce (and by delegation, the Director of the Census Bureau) considerable latitude in carrying out the census. In counting the nation’s population, it is important for the Bureau to stay on schedule, as the Secretary of Commerce is statutorily required to (1) conduct the census on April 1 of the decennial year, (2) report the state population counts to the President for purposes of congressional apportionment by December 31 of the decennial year, and (3) send population tabulations to the states for purposes of redistricting no later than 1 year after the April 1 census date. To meet these mandated reporting requirements, census activities need to take place at specific times and in the proper sequence. As Census Day approaches, the tolerance for any operational delays or changes becomes increasingly small. Throughout its history, the Bureau has mostly relied on its in-house capabilities to conduct the decennial census. However, the 2000 Census marked the first time the Bureau relied on contractors to perform a large number of major decennial activities. For example, the Bureau awarded a data capture contract—to scan more than 100 million questionnaires, capture and read that data, and send the information to headquarters for additional processing—to TRW, and awarded the advertising firm of Young & Rubicam a contract to develop an outreach and promotion campaign. Although the contractors generally performed well, Commerce’s Office of Inspector General identified several shortcomings. For example, incomplete quality assurance procedures for the Bureau’s printing contracts led to one contractor printing and mailing out approximately 20 million misaddressed letters informing households that the decennial questionnaires would soon follow, resulting in unnecessary negative publicity just weeks before the Bureau was to send out census forms. Further, the Inspector General found that the Bureau did not have sufficient program management staff with the training and experience to efficiently acquire systems and manage complex, high-dollar contracts. As a result, the Bureau incurred higher costs than necessary. For example, costs for the data capture system increased from a projected $49 million at the time of contract award in 1997 to $238 million by the end of the decennial because of continually changing and expanding requirements late in the decade. The Commerce Office of Inspector General recommended that for the 2010 Census, the Bureau would need a sufficient number of highly skilled and properly trained personnel dedicated to the planning and management of decennial contracts. The Bureau has awarded three of its seven major decennial contracts on time, and is working to accomplish contract milestones for these three and preparing for the award of the remaining four contracts. However, the tight systems development and testing schedule coupled with the interdependence of decennial systems may affect the Bureau’s ability to meet its ambitious schedule for completing the testing necessary for a successful census. As shown in table 1, the Bureau has awarded three of its seven major decennial contracts on time, and is working to accomplish contract milestones for these three and preparing for the award of the remaining four contracts. 
However, the Bureau has pushed back the award dates of two of the remaining four contracts because of changes in its acquisition approach for the contracts (additional detail about each of the seven contracts is presented in apps. III through VII). Going forward, it will be important for the Bureau to stay on schedule so that key systems can be demonstrated in concert with one another as part of the 2008 Dress Rehearsal. The MTAIP contract, for about $209 million, was awarded in June 2002 to the Harris Corporation (Harris). Harris is to correct in the Bureau’s geographic information system, called the Topologically Integrated Geographic Encoding and Referencing (TIGER) database, the location of every street, boundary, and other map feature so that coordinates are aligned with their true geographic locations. Our review of Bureau documents indicates that Harris is meeting expected schedule and cost targets for the MTAIP contract. According to Bureau documents, Harris completed work for 75 counties in fiscal year 2003, as was planned for the first year of production for the contract. Bureau documents also show that in fiscal years 2004 and 2005, Harris was both on schedule and within budget, completing 602 counties in 2004 and 623 counties in 2005. Similarly, for the first 2 months of fiscal year 2006, Harris was also on schedule and within budget. Bureau plans call for Harris to finish its work for all remaining counties by the end of fiscal year 2008. The DRIS contract was awarded in October 2005 to Lockheed Martin and is expected to cost more than $500 million. Bureau officials told us that work on the DRIS contract is slightly behind schedule. The implementation of the DRIS contract was pushed back by 60 to 90 days, according to Bureau officials, because of a bid protest that was ultimately withdrawn. Bureau officials told us they did not expect this change to substantially affect the contractor’s ability to complete the work as planned. DRIS staff are working to adjust the schedule for the first few months of the contract to accommodate the change. The Bureau awarded the FDCA contract to Harris for an estimated cost of $600 million. Although the award date was consistent with its schedule, the Bureau had revised the original award date for FDCA from late 2005 to March 2006 to enable multiple offerors to develop and test prototypes of the mobile computing device that will be used by enumerators during their fieldwork. The Bureau held a 3-day field demonstration in January 2006 to evaluate the prototype, and considered the results as part of the process for selecting a contractor. Bureau officials with responsibility for FDCA believe this strategy had multiple advantages. For example, they believe the development of a prototype prior to contract award increases the likelihood of having a working device in time for the first operation of the 2008 Dress Rehearsal. Of the four remaining contracts, the Bureau has also revised the original award dates for two but expects to award the contracts for printing and field office leasing according to its original schedule. The two contracts for which the Bureau has pushed back the award dates are the DADS II contract to replace the Bureau’s data tabulation and dissemination system and the 2010 Communications contract to advertise and promote the 2010 Census. The Bureau has twice changed the DADS II award date and contract scope. 
It originally planned to establish a new Web-based system that would serve as a single point for public access to all census data and integrate many dissemination functions currently spread across multiple Bureau organizations. The Bureau had planned to award that contract in the fourth quarter of fiscal year 2005. However, due to fiscal and resource constraints, the Bureau decided against investing in this integrated approach and opted instead to rely on contractors to enhance the DADS system used for the 2000 Census. The Bureau planned to release an RFP for DADS II on February 27, 2006, and to award the contract in August 2006. On March 8, 2006, however, the Bureau announced its plan to delay the release of the RFP by 6 months to gain a clearer sense of budget priorities before issuing a delegation of procurement authority. The Bureau also changed its plan to acquire a contractor to maintain and enhance the system used for the 2000 Census. In its draft RFP for the DADS II contract, the Bureau noted that because the system used in 2000 was becoming obsolete, it planned to revert to its original plan to acquire an integrated system. The Bureau currently estimates it will delay the award of the DADS II contract from August to October of 2006. The Bureau had also originally planned to award the 2010 Communications contract in October 2006—earlier in the decade than for Census 2000, when the Bureau awarded its advertising contract in October 1997—but has decided to do so at a later date because it is still researching various approaches to the acquisition. Bureau officials told us they plan to award the contract during the 2007 calendar year. They also told us that the contract is currently on track.
[Figure: 2010 Census timeline and major contract award schedule, showing key milestones: conduct the 2004 Census Test (2 field sites); conduct the 2005 National Census Test (content and response options); conduct the 2006 Census Test (2 field sites); conduct the dress rehearsal and begin to implement 2010 Census operations (e.g., begin opening field offices); and continue to implement operations (e.g., conduct address canvassing).]
Moreover, several of the Bureau's key decennial systems—both those developed by contractors and those developed by the Bureau itself—will need to exchange data (or interface) with each other to carry out decennial operations, as illustrated in figure 2. The decennial census comprises many systems that must work in concert and rely on one another. Because of these interdependencies, these various systems need to stay on schedule during the development phase. For example, data collected by the mobile computing devices supplied under the FDCA contract need to be processed by the data capture system provided by the DRIS contractor to be consistent with data from other sources, such as the Internet or telephone. More broadly, the principal census-taking activities and systems need to be sufficiently mature so they can be demonstrated in concert with one another as part of the 2008 Dress Rehearsal. Based on the Bureau's past experience, a true dress rehearsal—which requires the Bureau to specify all design features by 2007—is critical for meeting the Bureau's goals and objectives. We previously reported that during the 1998 Dress Rehearsal for the 2000 Census, a number of new features were not test-ready; as a result, the Bureau said it could not fully evaluate with any degree of assurance how they would affect the census. These late design changes and hastily developed untested systems resulted in additional costs to that census.
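Because these systems feed one another (for example, data captured on FDCA mobile devices must be processed by DRIS alongside mail, Internet, and telephone responses), a slip in one contract's schedule can put a dependent system's testing at risk. The sketch below is a hypothetical Python illustration of the kind of readiness check this interdependence implies; the system names come from this report, but the readiness dates, the milestone date, and the DRIS-to-DADS II interface are assumptions made for the example, not Bureau schedule data.

```python
from datetime import date

# Hypothetical readiness dates for illustration only; not Bureau schedule data.
system_ready = {
    "FDCA": date(2007, 3, 1),
    "DRIS": date(2007, 3, 15),
    "DADS II": date(2008, 9, 1),
}

# Data producer -> data consumer pairs; the FDCA -> DRIS flow is described in this
# report, while the DRIS -> DADS II pair is assumed for the example.
interfaces = [("FDCA", "DRIS"), ("DRIS", "DADS II")]

dress_rehearsal = date(2007, 4, 1)  # illustrative milestone, not the actual schedule

for producer, consumer in interfaces:
    both_ready = max(system_ready[producer], system_ready[consumer])
    flag = "ok" if both_ready <= dress_rehearsal else "AT RISK"
    print(f"{producer} -> {consumer}: both sides ready {both_ready} [{flag}]")
```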
For the 2010 Census, changes to the acquisition milestones of both the FDCA and DADS II contracts affected the testing programs for both of those systems. For example, as the Commerce Office of Inspector General concluded in a recent report, delaying FDCA time frames reduced the amount of time after contract award to complete the remainder of the work needed to prepare for, and begin, the dress rehearsal. Moreover, pushing back the award date resulted in a missed opportunity for the FDCA contractor to observe the real-time use of the mobile computing devices for address canvassing in 2005 as part of the 2006 test. According to the Inspector General, observations of the 2006 Test could have provided the contractor with a level of understanding of key census-taking operations that would have been difficult to obtain in any other fashion. Additionally, the DADS II system will not be developed in time to be fully tested during the 2008 Dress Rehearsal, partly due to the delay in its acquisition milestones. Moreover, because the Bureau moved the release date for the RFP from February to August 2006 and plans to award the contract in October 2006, the time frame the Bureau now has to prepare for awarding the contract has been compressed from 6 to 2 months. In 2 months, the Bureau has to (consistent with planning activities leading up to contract award by governments acquiring systems) prepare for and evaluate responses, conduct supporting negotiations, and recommend a contract award, among other activities involved in selecting a contractor. In planning its major acquisitions for the 2010 Census, the Bureau has generally adhered to the five leading practices for acquisition planning we selected (see fig. 3). However, additional efforts are needed within two of these practices in the Bureau’s activities leading up to contract award. As part of its strategic planning process (practice 1), the Bureau needs to complete its plan for integrating its major decennial systems. Further, in planning for its decennial acquisition workforce (practice 5), the Bureau needs to fully implement key principles of strategic workforce planning. In the years ahead, it will be important for Bureau management to follow these leading practices to successfully plan for and award its remaining contracts for the development of mission-critical systems to support activities for the 2010 Census. Leading results-oriented organizations that rely on acquisitions to accomplish their missions use strategic plans to align the activities of individual contractors with the organizations’ overall objectives. Linking an organization’s acquisition activities to specific program goals is particularly important for the census, where various systems have to work seamlessly and in the right sequence. For example, the National Academy of Sciences reported that during the 2000 Census, weaknesses in the Bureau’s strategic planning for major systems developed by contractors led to a patchwork of information systems that were costly, complex, and high risk. For the 2010 Census, the Bureau has developed a strategic plan linking some activities to be performed by contractors to the Bureau’s program goals. To enhance its planning process and improve systems efficiency, the Bureau is developing a 2010 Census Architecture, which is a blueprint of its business process, data, applications and interfaces, and the technologies needed to efficiently conduct the census. 
This architecture will also serve as the basis on which the Bureau and its contractors will build systems necessary to complete the 2010 Census. The National Academy of Sciences has endorsed the Bureau’s development of the 2010 Census Architecture and noted that its full use has the potential to greatly reduce risk in system development and enable the various information subsystems of the census to communicate effectively with each other. Within this architecture, the Bureau has several documents that detail its plans to produce a census that achieves its program goals for the reengineered 2010 Census. Although these documents do not specifically identify contracts, they do link activities that will be performed by contractors to achieving specific program goals. For example, the 2010 Baseline Design specifies that automation and use of mobile computing devices—to be provided by the FDCA contractor—will significantly reduce the amount of paper used in the field. It will also cut down on the large number of staff and the office space required to handle that paper, thereby also reducing the cost of the census. Likewise, in a budget document submitted to the Office of Management and Budget, the Bureau also links contracted activities to decreased workload and costs. The Bureau is planning for the integration of DRIS, FDCA, DADS II, and its other information technology systems. Successful systems integration involves almost every aspect of the project and reaches from the very beginning through the maintenance phase of a system’s life cycle. To facilitate this planning, the Bureau will use the 2010 Census Architecture to coordinate technical planning for systems integration. As part of this architecture, the Bureau has developed the Physical Architecture, which specifically identifies which systems need to exchange data or interface with one another. Contractors will be required to follow this document as they develop interoperable systems. Bureau officials stated that they plan to finalize the Physical Architecture by the spring of 2006. As the Bureau continues its testing and development for the 2010 Census, it will be important for it to fully develop and carry out its plan to integrate its decennial systems. The Bureau has taken the responsibility of managing systems integration itself. Therefore, it needs to provide each contractor with the information needed to enable the systems they develop to work in concert with other decennial systems. Bureau officials indicate that they intend to define these information needs after all major information technology contracts have been awarded and will implement a joint effort with the Bureau’s contractors and in-house developers to integrate its systems development schedules at that time. However, the Bureau has not yet established a schedule for defining this information that needs to be shared with contractors or other census teams for their development of decennial systems. To successfully provide this information on schedule so as to ensure the successful integration of decennial systems, the Bureau—in its role as the systems integrator— should establish a schedule to define interfaces between all decennial systems so that the interface information can be provided on a timely basis to development teams. Consistent with the leading acquisition planning practice of strategically planning for contracts, the successful integration of decennial systems is a key factor in the Bureau’s ability to meet its internal milestones. 
This integration will decrease the chance for unanticipated cost increases as well as technical and programmatic risks. Agencies relying on contractors should monitor planning activities leading up to contract award so that appropriate corrective actions can be taken if the process begins to deviate from plan. These planning activities involve (1) planning for and performing the actions necessary to develop and issue a solicitation package, (2) preparing for the evaluation of responses, (3) conducting an evaluation, (4) conducting supporting negotiations, and (5) making recommendations for award of the contract. Without appropriate monitoring of acquisition planning, agencies run the risk of delaying contract award and other contract milestones, which can result in acquisitions becoming more costly than necessary. The Bureau has monitored activities leading up to contract award for the three major contracts it has awarded and is monitoring its acquisition planning for the remaining four major contracts. For two of its awarded contracts—MTAIP and DRIS—the Bureau has established acquisition project schedules and processes, while also tracking whether its acquisition activities are performed on time through the maintenance, review, and inspection of detailed contract files. The Bureau was relatively close to meeting the dates specified in its contracts’ revised planning schedules for the issuance of the MTAIP and DRIS RFPs and subsequent award of those contracts. The Bureau has also been monitoring the planning process for the award of its remaining major decennial contracts. Continued monitoring of contractor performance after contract award will also factor heavily into the success of major decennial contracts. For example, in our March 2006 testimony focusing on the DRIS and FDCA contracts, we noted that several plans needed for post-award contract monitoring for the two contracts, such as detailed performance measures for tracking the contractor or the Bureau’s own internal progress, were not yet developed. While the Bureau does not have a policy requiring such plans to be completed prior to contract award, not having them in place could limit the Bureau’s ability to determine when performance deviates from expectations and could increase the risk of delays in identifying problems with the project and taking appropriate corrective actions. In our previous work, we found that engaging relevant stakeholders and empowering them to coordinate acquisition actions help agencies to better define their needs and to identify, select, and manage providers of goods and services. For the inputs of stakeholders to be useful during the acquisition planning phase, careful selection of relevant stakeholders is necessary. A plan for stakeholder involvement should include a list of relevant stakeholders, the roles and responsibilities of the relevant stakeholders, and a schedule for stakeholder involvement. The Bureau, in its evaluations of the 2000 Census, reported that it could have had greater involvement from internal division stakeholders in its planning process. Likewise, the Commerce Office of Inspector General found that inadequate stakeholder participation—namely, the lack of coordination between the General Services Administration (GSA), the contractors GSA managed, and Bureau staff—resulted in many wasted hours of government employee time and increased contractor cost on the contract involving the opening of over 500 local offices during the 2000 Census. 
For some decennial contracts, the Bureau developed plans that include a list of relevant stakeholders, their roles and responsibilities, and schedules of when the involvement of each is needed. For example, the project management plan for the FDCA contract includes a strategy to communicate between internal and external stakeholders and the different management and technical teams that will provide oversight of the FDCA contract. It also details specific roles and responsibilities for individuals within project teams that will support the management and technical activities for the FDCA contract. In another example, the charter for the DRIS acquisition review team details the composition of the team, membership responsibilities, and guidelines for reviewing the acquisition of the system. After contract award, Bureau attention to stakeholder involvement will remain important. For example, each participant's role in post-contract-award activities should be clearly defined and shared among stakeholders for each contract. We noted in our testimony evaluating the Bureau's progress on the DRIS contract that in at least one case, the Bureau has not yet obtained written stakeholder buy-in on a project plan for managing the contract. An agency's increased reliance on contractors may result in changes to its business processes that can adversely affect staff and the performance of the contractor. For example, a 2003 IBM study found that during the 2000 Census, some Bureau employees felt threatened by the presence of contractors because they believed that their roles and responsibilities had been taken away from them. Additionally, the Bureau did not have established processes to transfer knowledge and information from Bureau personnel to contractors. This lack of effective communication created tensions and engendered a less-than-constructive working relationship between contractors and Bureau staff, according to IBM. Moreover, the study found that because Bureau employees did not know how to properly define contractual requirements and deliverables, there were cost overruns. For the 2010 Census, the Bureau has planned several needed changes to its business processes. For example, to improve how it defines contractual requirements and deliverables, project teams led by the Bureau's Decennial Management Division are to oversee the development and management of requirements for particular operations and associated contracts. The teams will also work in conjunction with contractors to facilitate the understanding and execution of system requirements. To improve communication between Bureau and contractor staff, Bureau officials are relying on the 2010 Census Architecture to provide a formal means of sharing processes and requirements with contractors. Other Bureau officials have observed that the sharing of 2010 Census Architecture work products with contractors that has occurred to date has already resulted in improvements: the Bureau received better proposals from potential contractors, better conveyed its systems needs to contractors during the RFP phase, and had a means to provide answers to contractors' inquiries about systems specifications. Agencies that rely heavily on acquisitions to accomplish their missions stand to benefit greatly by planning strategically for their acquisition workforces.
In a previous report, we noted that this planning should include developing a strategic workforce plan that defines the capabilities that will be needed by the acquisition workforce in the future, as well as strategies that can help this workforce meet these capabilities. During the 2000 Census, the Bureau experienced some difficulty managing its contracts because of a lack of skilled acquisition and contract-management personnel. For example, the Commerce Office of Inspector General reported that, because the Bureau's Decennial Systems and Contracts Management Office lacked staff with the experience needed to manage large-scale contracts, the Bureau did not prepare a written contract surveillance and management plan when it awarded a contract to a firm to help respondents complete their census questionnaires over the telephone. (Surveillance and management plans describe the responsibilities, roles, and interactions among the program office, contracting officer, and contractor.) Although the Department of Commerce, in commenting on a draft of this report, noted that the Bureau carried out these surveillance and management activities without a written plan, a written plan would have provided greater assurance that (1) the contracts were executed successfully, (2) the contracts were not changed without authorization, and (3) the contractor performed as expected. For the 2010 Census, the Bureau continues to face acquisition workforce challenges. Senior officials told us that the agency lacks and has trouble recruiting qualified acquisition personnel with the necessary experience and skills to award and oversee complex contracts. Additionally, the Bureau has not strengthened the monitoring of its mission-critical workforce, which we noted in a June 2005 report should be monitored more closely and at a higher level. (According to a Commerce planning document, the Bureau considers its decennial acquisition workforce to be mission-critical.) For example, the Bureau did not identify its decennial acquisition workforce in its overall human capital management plan, nor did it solicit the input of the Acquisition Division in developing that plan. An April 2005 Office of Management and Budget policy letter to federal departments and agencies underscores the importance of this type of planning by requiring high-level acquisition officials to provide substantial input to their agency's human capital strategic plans regarding the acquisition workforce. We have previously identified five key principles that strategic workforce planning should address: (1) involving top management, employees, and other stakeholders in developing and implementing the workforce plan; (2) determining critical skills and competencies needed to achieve programmatic results; (3) developing strategies tailored to address gaps in critical skills and competencies; (4) building the capability needed to address administrative, educational, and other requirements important to support workforce strategies; and (5) monitoring and evaluating the agency's progress toward its human capital goals. The Bureau has incorporated some key strategic workforce planning principles in planning for its acquisition workforce, but primarily at a division level.
Divisions within the Bureau that have responsibility for acquisition-related staff have independently implemented certain strategic workforce planning actions, including working to determine the critical skills and competencies needed to award and manage decennial contracts and developing strategies to have adequate skilled staff in place in time for the decennial. For example, as part of its workforce planning, the Decennial Systems and Contracts Management Office retained a contractor to conduct a study of what grades, competencies, and skills were needed to effectively manage the DRIS contract. Bureau divisions are also turning to formal training to enhance the capabilities of their staff. For instance, the Decennial Management Division is requiring some of its employees to take project management or contracting officer's technical representative training. Likewise, the Decennial Systems and Contracts Management Office has trained some of its staff in program management as well as in the development of enterprise architecture. At an agencywide level, the Bureau has taken some initial steps to identify the skills and competencies needed to manage contracts, but more could be done. For example, in the Bureau's strategic human capital plan, the Bureau acknowledges that project and contract management are among the new skills required for its staff for the reengineering of the 2010 Census. To build the capacity to help staff obtain these and other skills, the Bureau has established a Project Management Master's Certificate Program and an Information Technology Master's Certificate Program, and has developed competency guides as well. According to Commerce, these certificate programs, initiated in 1998, are a way to develop the management and leadership skills needed by mid- to senior-level career employees to successfully oversee Bureau operations well beyond the 2010 Census. However, the Bureau still lacks an agencywide approach to strategically planning for its acquisition workforce. First, as we previously noted, the Bureau does not assess or monitor gaps in numbers by mission-critical occupation at an agencywide level. Instead, it focuses on "building infrastructure" by recruiting and developing competencies. The Bureau delegates decisions to line managers to fill vacancies, and believes there is no need to assess workers by mission-critical categories. Because it does not perform this agencywide assessment, the Bureau cannot monitor its mission-critical occupations related to acquisitions more closely and at a higher level within the agency. As a result, it may not know overall if it has the acquisition-related competencies it needs in place agencywide to be prepared for conducting the 2010 Census as efficiently or effectively as possible. Second, the Bureau has not identified the needs of its decennial acquisition workforce in its agencywide human capital management plan, nor has it developed a separate plan specific to the acquisition workforce that identifies these needs. Further, according to Bureau officials in the Acquisition Division, their input was not sought in the development of the Bureau's existing human capital management plan. This lack of high-level attention to the decennial acquisition workforce in the Bureau's strategic human capital planning process is notable, especially in light of the Bureau's challenges in recruiting qualified acquisition personnel.
It will be important for the Bureau to address the needs of its acquisition workforce in its agencywide human capital management plan or a separate plan and to involve the Acquisition Division in this planning effort. Taking these actions would help facilitate a better alignment between the acquisition workforce and the demands brought on by the Bureau’s greater reliance on contractors for the successful conduct of the 2010 Census. As the 2010 Census approaches, the Bureau faces the challenge of managing its extensive network of contractors to perform mission-critical operations. The Bureau is well aware that early planning, testing, and development will help facilitate a successful decennial census. Acquisition planning plays a key role in that process and provides a road map the Bureau can use to manage its contracts to increase the likelihood of timely deliverables at reasonable cost. Overall, work on the seven major decennial contracts is moving forward. Still, as Census Day 2010 draws closer, it will become increasingly difficult for the Bureau to make up any time lost to delays. Already, aspects of the Bureau’s DADS II system will not be assessed in the dress rehearsal because of a change in the contract’s acquisition milestones, while changes to FDCA time frames have reduced the amount of time the Bureau will have to complete the work needed to prepare for, and begin, the dress rehearsal. Further, to help the contractors stay on track, Bureau officials will need to document a schedule for when information needs to be exchanged between contractors and census teams working to develop these interoperable systems for the 2010 Census. The Bureau also needs to manage human capital planning for its acquisition workforce strategically and at an agencywide level. To help the Bureau improve the management of the 2010 Census, we recommend that the Secretary of Commerce direct the Bureau to take the following three actions: Ensure that the key systems to be developed or provided by contractors for the 2010 Census are fully functional and ready to be assessed in concert with other operations as part of the 2008 Dress Rehearsal. Establish a schedule for the definition of interfaces between all decennial systems so that these data can be provided on a timely basis to development teams. Devote further attention to planning strategically for its decennial acquisition workforce by (1) assessing, at a higher level within the agency, whether it has the acquisition-related skills needed to conduct the 2010 Census, developing strategies to identify and address gaps, monitoring and evaluating progress toward closing gaps, and adjusting strategies accordingly; and (2) identifying the needs of the acquisition workforce in its human capital management plan or another acquisition-specific workforce plan and involving appropriate stakeholders in this planning effort. In written comments on a draft of this report, Commerce neither agreed nor disagreed with our recommendations. Commerce commented on aspects of our principal findings and our third recommendation regarding its planning for the decennial acquisition workforce. Its comments included some technical corrections and suggestions where additional context was needed, and we revised the report to reflect these comments as appropriate. Commerce’s comments are reprinted in their entirety in appendix II. Commerce did comment on our first principal finding concerning the Bureau’s readiness for the 2008 Dress Rehearsal.
This finding led to our first recommendation for the Bureau to ensure that its key systems are fully functional and ready to be assessed in concert with other operations during the dress rehearsal. Commerce noted that the Bureau provided competitors for the FDCA contract information about the design, requirements, and specifications for the 2006 Test in its RFP (we have now added this information to our report). Commerce also noted that the Bureau will be sharing preliminary results from the 2006 Test with Harris—the firm that was awarded the contract—as soon as the results are available. However, the Bureau did not specify when this might be. Moreover, as we discussed in the report, the mobile computing devices will need to be ready by April 2007, when the Bureau is to use them for the address canvassing operation for the 2008 Dress Rehearsal. Consequently, the contractor will have around a year, perhaps less, to study the results of the 2006 Test; assess what worked and what improvements, if any, are needed; and develop and test any solutions in time to be included in the devices that will be used in 2007. For our second principal finding that the Bureau does not have a schedule for defining what and when information needs to be provided to development teams to better integrate the systems they develop, Commerce did not comment on our recommendation for the Bureau to develop such a schedule, but stated that it was not clear how the Bureau could have had such a schedule prior to awarding the contracts. Commerce further noted that the Bureau plans to implement a joint effort with its contractors and in-house developers to integrate its development schedules. Our report acknowledged that the Bureau intended to define these information needs after it awards the major information technology contracts. We believe that establishing a schedule defining the interfaces between all decennial systems as soon as practical is critical because it allows the Bureau to better manage the process and hold various components accountable to a schedule and thus help ensure the successful integration of decennial systems. In its comments related to our third finding and recommendation for the Bureau to assess the decennial acquisition workforce at a higher level within the agency, Commerce described the actions the Bureau is taking consistent with this recommendation. For example, Commerce reported that high-level Bureau officials will be regularly briefed on the status of each decennial acquisition. Commerce also detailed the steps the Bureau is taking with stakeholders to plan for the needs of the Bureau’s acquisition-related workforce as part of its human capital management plan. Commerce noted that this plan includes input from managers who represent each Bureau directorate. These are important first steps toward addressing our third recommendation. While the Bureau has begun working closely with stakeholders to plan for the decennial acquisition workforce as part of its human capital management plan, it has not yet begun incorporating that information into the plan. As we stated in our report, documenting its decennial acquisition workforce needs in the Bureau’s strategic human capital plan would help facilitate a better alignment between the acquisition workforce and the demands brought on by the Bureau’s greater reliance on contractors for the successful conduct of the 2010 Census. 
In addition, Commerce commented on information in our report that was obtained from our March 2006 testimony and a 2002 Commerce Office of Inspector General study. Specifically, our report notes that in March 2006, we testified that neither the FDCA nor DRIS contract project offices had the full set of capabilities they need to effectively manage those acquisitions. Commerce commented that full project management offices were not needed to carry out the Bureau’s initial acquisitions and that these offices will be staffed in time to effectively manage the contracts. As discussed in the testimony and noted in our report, a full set of capabilities—including the institution of requirements management or risk management processes—is a significant factor in successful systems acquisitions and development programs. Having these capabilities in place will also improve the likelihood of meeting cost and schedule estimates as well as performance requirements. Regarding the Inspector General’s study, we noted that the Inspector General found that the cost of the data capture system for the 2000 Census increased almost fivefold by the end of that decennial cycle because of continually changing and expanding requirements late in the decade, and the Inspector General recommended that for 2010, the Bureau would need a sufficient number of trained personnel dedicated to the planning and management of decennial contracts. In its comments, Commerce noted that the issue of changing and expanding requirements must be addressed by program management, and that the Bureau, in its preparations for the 2010 Census, is following practices for rigorous requirements management. We are sending copies of this report to the Secretary of Commerce, the Commerce Office of Inspector General, the Director of the U.S. Census Bureau, and interested congressional committees. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives for this report were to (1) determine the status of the U.S. Census Bureau’s (Bureau) major contracts related to the 2010 Census, and (2) evaluate the extent to which the Bureau is using selected leading practices to manage its acquisition planning process for the decennial census. To address our first objective, we reviewed documents related to major 2010 Census acquisitions, including acquisition plans, requests for proposals (RFP), finalized contracts, and budget requests to the Office of Management and Budget. We also reviewed the Bureau’s strategic planning documents, such as its 2010 Census Management Plan, 2010 Census Architecture, and 2010 Baseline Design for Reengineering the Decennial Census. Additionally, we interviewed Bureau officials about the status of and future plans for the major contracts for the 2010 Census (as defined by Bureau officials).
These included officials from the Decennial Management Division, which is responsible for implementing the decennial census; the Decennial Systems and Contracts Management Office, which manages selected system contracts supporting the decennial census; and the Acquisition Division, which carries out acquisition activities, including setting up and signing contracts, for other Bureau offices. Further, we interviewed an official from the Decennial Information Technology and Geographic Systems division. For the second objective, we identified selected leading acquisition planning practices used in the federal government from a variety of sources. Sources included our own guidance, reports, and testimonies on the acquisition function as well as external works, such as the Capability Maturity Model® Integration (CMMI) model. The CMMI model was developed by Carnegie Mellon University’s Software Engineering Institute, recognized for its expertise in software and system processes. The CMMI model includes criteria to evaluate, improve, and manage system and software development processes. We adapted these CMMI criteria to evaluate system and software development issues during acquisition planning for the four information technology contracts (Field Data Collection Automation, Decennial Response Integration System, MAF/TIGER Accuracy Improvement Project, and Data Access and Dissemination System II). From these, we selected five leading practices based on the acquisition-related challenges the Bureau faced during Census 2000. The five leading practices we selected focused on management oversight of the Bureau’s acquisition planning process, not on the Bureau’s acquisition strategy for specific contracts or compliance with the Federal Acquisition Regulation. To evaluate the extent to which the Bureau followed these leading practices, we reviewed relevant Bureau documents, such as acquisition plans, strategic planning documents, RFPs, finalized contracts, and budget requests to the Office of Management and Budget; observed some acquisition-related events at the Bureau, including Bureau presentations for potential bidders and contract monitoring meetings; and interviewed knowledgeable Bureau officials about acquisition planning. We focused on the Bureau’s activities to date in planning for its major decennial contracts. Because the Bureau is still planning most of these acquisitions, our review presents findings about current status and plans as reported by Bureau officials or as supported by Bureau documents. We conducted our work from July 2005 through March 2006 in accordance with generally accepted government auditing standards. Appendix III: MAF/TIGER Accuracy Improvement Project (MTAIP) Contract Details The primary goal of the contract awarded to the Harris Corporation (Harris) is to correct information in the Bureau’s repository of the location of every street, boundary, and other map feature (known as the TIGER database) so that coordinates are aligned with their true geographic locations. Harris will also develop a capability to link each of these geographic locations and coordinates on file with a corresponding record to the Bureau’s address list of where people live and work (also known as MAF). The Bureau uses large volumes of information from many external sources to establish and maintain a current and accurate housing unit address list, boundaries for all governments, address ranges to facilitate geocoding various files, and other map information.
2010 Census: MAF/TIGER will interface with several entities that are external to the program. Some entities include the Decennial Response Integration System (DRIS)—with which it will coordinate to maintain data as geography records are created, updated, or deleted—and the Field Data Collection Automation (FDCA) system, with which it will work to store spatial coordinates for the mobile computing devices provided by the FDCA contractor. Appendix IV: Decennial Response Integration System (DRIS) Contract Details The DRIS contractor will be responsible for designing, building, and operating the systems, staffing, and infrastructure to process census data provided by respondents via census forms, telephone agents, the Internet, and enumerators; assist the public via telephone and Internet; and monitor the quality and status of the data capture operations. The DRIS contract does not include providing the systems or staffing used for field enumeration operations. Census 2000: The Bureau procured key elements of its data capture and processing system from different contractors, and it developed in-house a system for collecting data from the Internet. The Bureau was also responsible for integrating data collected through the different modes of capture (paper, telephone, Internet, and field operations). 2010 Census: The DRIS contract will allow the Bureau to have a single contractor design, develop, and implement a system to integrate data from all of the response modes. Activities to date: The Bureau began acquisition planning for DRIS in 2003. Between 2003 and contract award, it performed planning and solicitation activities including conducting research on data capture, developing planning documents, drafting and issuing an RFP, and evaluating proposals from vendors. The Bureau awarded the contract on schedule in October 2005. Appendix V: Field Data Collection Automation (FDCA) Contract Details FDCA will provide the automation for the Bureau to capture information collected during personal interviews, eliminating the need for paper maps and address lists for address canvassing and nonresponse follow-up. The FDCA contractor will provide office automation for 12 regional census centers, the Puerto Rico area office, and approximately 500 local census offices; the telecommunications infrastructure for headquarters, regional, and local offices; mobile computing devices for field workers; integration with other 2010 Census systems; and development, deployment, technical support, deinstallation, and disposal services. The contract encompasses activities associated with creating a management plan and addressing in detail the first major phase (Execution Period #1) of FDCA implementation; activities proposed by the FDCA contractor to support field operations to conduct the 2008 Dress Rehearsal (Mar. 2006 – Mar. 2009); and all activities proposed by the FDCA contractor to support field operations to conduct the 2010 Census (April 2006 – Sept. 2010). Census 2000: Field data collection was predominantly handled through paper address lists, maps, and questionnaires. 2010 Census: FDCA is designed to support the Bureau’s field data collection activities and to interface with the Decennial Response Integration System, the Bureau’s data capture system. Activities to date: The Bureau moved the award date for FDCA from the end of calendar year 2005 to March 2006 to enable potential contractors to develop, at their own expense, “near production ready” prototype systems for address canvassing, which will be evaluated as part of source selection. Appendix VI: Data Access and Dissemination System II (DADS II) Contract Details DADS, developed prior to Census 2000, is the Bureau’s system for tabulating and disseminating data from the decennial census and other Bureau surveys to the public.
DADS allows users to access prepackaged data products, as well as to build custom reports through its American FactFinder Web site. The Bureau will require the DADS II contractor to replace the legacy DADS system that the Bureau used during Census 2000. 2010 Census: The contractor will be required to provide a replacement of the legacy system to form an integrated solution. The contractor is also expected to provide comprehensive support to the Census 2000 DADS system. Activities to date: The Bureau originally planned to establish a new Web portal for the public to access all census data and to integrate many of the dissemination functions. Because of fiscal and resource constraints, it decided not to invest in this new initiative, but instead to pursue a follow-on acquisition to the DADS program. In March 2006, the Bureau announced that it will delay the release of the solicitation and contract award date to gain a clearer sense of budget priorities before issuing a delegation of procurement authority. It also changed the scope of the contract to ask contractors to develop a replacement to its legacy system. Appendix VII: Summaries of Major Decennial Contracts Planned for Award in 2007 or Later Census 2000 experience: The Bureau awarded a contract to Young & Rubicam, along with four partner agencies, to create and produce an advertising campaign to inform and motivate the public to complete and return the census form. The campaign was considered an operational success, and was one of the factors that helped boost response rates. 2010 acquisition plans: While the scope of the 2010 Communications contract has not yet been determined, it may include some or all of the components of the 2000 Communications Program (advertising, media relations, and special events, among others). The Bureau plans for it to be part of an integrated communications program where all components work together to motivate participation in the 2010 Census. Agency managing contracts: Government Printing Office (GPO) Estimated cost: To be determined Award date: Estimated March 2007 (contract for major operations); November 2008 – June 2009 (other printing contracts) Census 2000 experience: The Bureau used GPO to contract for decennial printing jobs, such as the printing of questionnaire packages, field follow-up forms, and promotional materials. GPO then selected dozens of private sector companies to perform the work. Contracts for printing Census 2000 material totaled over $65 million and included the printing of almost 400 million items. 2010 acquisition plans: The Bureau will again work with GPO to contract for its printing needs. In March 2007, the Bureau plans to award one major contract for one of its most complex printing operations, including the printing of questionnaires and other materials for the mail out/mail back and replacement mailing operations. Contracts for the printing of other products, such as military census forms and reminder postcards, will be awarded separately beginning in late 2008. Census 2000 experience: The Bureau and GSA formed a partnership that obtained leases, oversaw the build-out construction of offices, arranged for security and for data and voice telecommunications, and provided office equipment and supplies. All local census offices were successfully opened in sufficient time to conduct operations. 2010 acquisition plans: As in Census 2000, the Bureau has established an interagency agreement with GSA to structure lease agreements.
The Bureau expects to lease 12 regional census centers, 1 Puerto Rico area office, and approximately 500 local census offices nationwide and in Puerto Rico. In addition to the contact named above, Robert Goldenkoff, Assistant Director; Betty Clark; Shirley Hwang; Anne McDonough-Hughes; and Brendan St. Amant made key contributions to the report. Tim DiNapoli, Richard Donaldson, Richard Hung, John Krump, Donna Miller, and Amy Rosewarne provided significant technical support. 2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006. Census Bureau: Important Activities for Improving Management of Key 2010 Decennial Acquisitions Remain to be Done. GAO-06-444T. Washington, D.C.: March 1, 2006. Data Quality: Improvements to Count Correction Efforts Could Produce More Accurate Census Data. GAO-05-463. Washington, D.C.: June 20, 2005. Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005. 2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-9. Washington, D.C.: January 12, 2005. Data Quality: Census Bureau Needs to Accelerate Efforts to Develop and Implement Data Quality Review Standards. GAO-05-86. Washington, D.C.: November 17, 2004. American Community Survey: Key Unresolved Issues. GAO-05-82. Washington, D.C.: October 8, 2004. 2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004. Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003. Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003. Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003. Framework for Assessing the Acquisition Function at Federal Agencies. GAO-05-218G. Washington, D.C.: September 2005. Human Capital: Selected Agencies Have Opportunities to Enhance Existing Succession Planning and Management Efforts. GAO-05-585. Washington, D.C.: June 30, 2005. Homeland Security: Successes and Challenges in DHS’s Efforts to Create an Effective Acquisition Organization. GAO-05-179. Washington, D.C.: March 29, 2005. Transportation Security Administration: High-Level Attention Needed to Strengthen Acquisition Function. GAO-04-544. Washington, D.C.: May 28, 2004. Federal Procurement: Spending and Workforce Trends. GAO-03-443. Washington, D.C.: April 30, 2003. Acquisition Workforce: Status of Agency Efforts to Address Future Needs. GAO-03-55. Washington, D.C.: December 18, 2002.
For the 2010 Census, the U.S. Census Bureau (Bureau) is making the most extensive use of contractors in its history to supply a number of mission-critical functions and technologies. Because of the critical role that contractors will play in the 2010 Census, GAO reviewed the Bureau's acquisition planning process. Specifically, GAO's objectives were to (1) determine the status of the Bureau's major decennial contracts, and (2) evaluate the extent to which the Bureau is using selected leading practices to manage its acquisition planning for these contracts. The Bureau has awarded three of its seven major decennial contracts consistent with their planned award dates, but has changed the award dates of two of the remaining contracts (data dissemination and communications) because of changes in its acquisition approach. Bureau officials noted that the communications contract is currently on track. Still, changes in contract milestones--coupled with the Bureau's tight systems development schedule and interdependence of those systems--could affect the Bureau's ability to develop fully functional and sufficiently mature systems to be tested in concert with other operations during the 2008 Dress Rehearsal for the 2010 Census. Already, aspects of the Bureau's data dissemination system will not be assessed during the dress rehearsal because of changes to solicitation and contract award dates. To date, the Bureau has generally followed five selected leading practices for federal acquisition planning that we evaluated. For example, the Bureau has monitored the acquisition planning process for individual contracts, involved relevant stakeholders in the planning phase, and implemented certain changes to its business processes resulting from its reliance on contractors. However, as part of its strategic planning, the Bureau does not have a schedule for documenting what and when information needs to be provided to development teams to integrate all decennial systems. Additionally, in planning for its decennial acquisition workforce--which includes staff who award or manage contracts--the Bureau has not fully implemented key strategic workforce planning principles. For example, while the Bureau took steps at the division level to plan for its acquisition workforce, it does not assess or monitor, at a high level, gaps in the skills needed by its decennial acquisition workforce. The Bureau also has not identified the needs of the decennial acquisition workforce in its human capital management plan and did not involve all relevant acquisition workforce stakeholders in the development of this plan.
The Centers for Disease Control and Prevention (CDC) is the federal agency primarily responsible for monitoring the incidence of foodborne illness in the United States. In collaboration with state and local health departments and other federal agencies, CDC investigates outbreaks of foodborne illnesses and supports disease surveillance, research, prevention efforts, and training related to foodborne illnesses. CDC coordinates its activities concerning the safety of the food supply with the Food and Drug Administration (FDA) in the Department of Health and Human Services and those concerning the safety of meat, poultry, and eggs with the Food Safety and Inspection Service (FSIS) in the U.S. Department of Agriculture (USDA). FDA and FSIS, which are the primary federal agencies responsible for overseeing the safety of the food supply, maintain liaison with CDC in Atlanta, Georgia. CDC monitors individual cases of illness from harmful bacteria, viruses, chemicals, and parasites (hereafter referred to collectively as pathogens) that are known to be transmitted by foods, as well as foodborne outbreaks, through reports from state and local health departments, FDA, and FSIS. CDC does not have the authority to require states to report data on foodborne illnesses. In practice, each state determines which diseases it will routinely report to CDC. In addition, state laboratories voluntarily report the number of positive test results for several diseases that CDC has chosen to monitor. However, these reports do not identify the source of infection and are not limited to cases of foodborne illness. CDC also investigates a limited number of more severe or unusual outbreaks when state authorities request assistance. (For a description of the data that CDC relies on to monitor foodborne illnesses, see app. I.) At least 30 pathogens are associated with foodborne illnesses. For reporting purposes, CDC categorizes the causes of outbreaks of foodborne illnesses as bacterial, chemical, viral, parasitic, or unknown pathogens. (See app. II. for information on these pathogens and the illnesses they cause.) Although many people associate foodborne illnesses primarily with meat, poultry, eggs, and seafood products, many other foods, including milk, cheese, ice cream, orange and apple juices, cantaloupes, and vegetables, have also been involved in outbreaks during the last decade. Bacterial pathogens are the most commonly identified cause of outbreaks of foodborne illnesses. Bacterial pathogens can be easily transmitted and can multiply rapidly in food, making them difficult to control. CDC has targeted four of them—E. coli O157:H7, Salmonella Enteritidis, Listeria monocytogenes, and Campylobacter jejuni—as those of greatest concern. (See app. III.) CDC is also concerned about other bacterial pathogens, such as Vibrio vulnificus and Yersinia enterocolitica, which can cause serious illnesses, and Clostridium perfringens and Staphylococcus aureus, which cause less serious illnesses but are very common. The chemical causes of foodborne illnesses are primarily natural toxins that occur in fish or other foods but also include heavy metals, such as copper and cadmium. Viral pathogens are often transmitted by infected food handlers or through contact with sewage. Only a few viral pathogens, such as the Hepatitis A and Norwalk viruses, have been proven to cause foodborne illnesses. Finally, parasitic pathogens, such as Trichinella—found in undercooked or raw pork—multiply only in host animals, not in food. 
CDC officials believe that viral and parasitic pathogens are less likely than bacterial pathogens to be identified as the source of an outbreak of foodborne illness because their presence is more difficult to detect. The existing data on the extent of foodborne illnesses have weaknesses and may not fully depict the extent of the problem. Public health experts believe that the majority of cases of foodborne illness are not reported because the initial symptoms of most foodborne illnesses are not severe enough to warrant medical attention, the medical facility or state does not report such cases, or the illness is not recognized as foodborne. However, according to the best available estimates, based largely on CDC’s data, millions of people become sick from contaminated food each year, and several thousand die. In addition, public health and food safety officials believe that the risk of foodborne illnesses is increasing for several reasons. For example, as a result of large-scale food production and broad distribution of products, those products that may be contaminated can reach a great number of people in many locations. Furthermore, new and more virulent strains of previously identified harmful bacteria have been identified in the past several decades. Also, mishandling or improper preparation can further increase the risk. Between 6.5 million and 81 million cases of foodborne illness and as many as 9,100 related deaths occur each year, according to the estimates provided by several studies conducted over the past 10 years. Table 1 shows the range of estimates from four studies cited by food safety experts as among the best available estimates on the subject. The table also identifies the data on which these estimates are based. While various foods have been implicated as vehicles for pathogens in foodborne illnesses and related deaths, the available data do not allow a precise breakdown by specific foods. In general, animal foods—beef, pork, poultry, seafood, milk, and eggs—are more frequently identified as the source of outbreaks in the United States than non-animal foods. USDA, which regulates meat and poultry products, has estimated that over half of all foodborne illnesses and deaths are caused by contaminated meat and poultry products. The wide range in the estimated number of foodborne illnesses and related deaths is due primarily to the considerable uncertainty about the number of cases that are never reported to CDC and the methodology used to make the estimate. Public health and food safety officials believe that many of these illnesses are not reported because the episodes are mild and do not require medical treatment. For example, CDC officials believe that many intestinal illnesses that are commonly referred to as the stomach flu are caused by foodborne pathogens. According to these officials, people do not usually associate these illnesses with food because the onset of symptoms occurs 2 or more days after the contaminated food was eaten. In other cases, a foodborne illness may contribute to the death of an already ill person. In these cases, a foodborne illness may not be reported as the cause of death. In the absence of more complete reporting, researchers can only broadly estimate the number of illnesses and related deaths. Furthermore, most physicians and health professionals treat patients who have diarrhea without ever identifying the specific cause of the illness. In severe or persistent cases, a laboratory test may be ordered to identify the responsible pathogen. 
However, some laboratories may not have the ability to identify a given pathogen. Finally, physicians may not associate the symptoms they observe with a pathogen that they are required to report to the state or local health authorities. For example, a CDC official cited a Nevada outbreak in which no illnesses from E. coli O157:H7 had been reported to health officials, despite a requirement that physicians report such cases to the state health department. Nevertheless, 58 illnesses from this outbreak were identified after public service announcements alerted the public and health professionals that contaminated hamburger had been shipped to restaurants in a specific area of the state. Food safety and public health officials believe that the risk of foodborne illnesses is increasing. Several factors contribute to this increased risk. First, the food supply is changing in ways that can promote foodborne illnesses. For example, as a result of modern animal husbandry techniques, such as crowding a large number of animals together, the pathogens that can cause foodborne illnesses in humans can spread throughout the herd. Because of broad distribution, contaminated products can reach individuals in more locations. Mishandling of food can also lead to contamination. For example, leaving perishable foods at room temperature increases the likelihood of bacterial growth, and improper preparation, such as undercooking, reduces the likelihood that bacteria will be killed and can further increase the risk of illness. There are no comprehensive data to explain at what point pathogens are introduced into foods. Knowledgeable experts believe that although illnesses and deaths often result after improper handling and preparation, the pathogens were, in many cases, already present at the processing stage. Furthermore, the pathogens found on meat and poultry products may have arrived on the live animals. Second, because of demographic changes, more people are at greater risk of contracting a foodborne illness. Certain populations are at greater risk for these illnesses: people with suppressed immune systems, children, and the elderly. In addition, children are more at risk because group settings, such as day care centers, increase the likelihood of person-to-person transmission of pathogens. The number of children in these settings is increasing, as is the number in other high-risk groups, according to CDC. Third, three of the four pathogens CDC considers the most important were unrecognized as causes of foodborne illness 20 years ago—Campylobacter, Listeria, and E. coli O157:H7. Fourth, bacteria already recognized as sources of foodborne illnesses have found new modes of transmission. While many illnesses from E. coli O157:H7 occur from eating insufficiently cooked hamburger, these bacteria have also been found more recently in other foods, such as salami, raw milk, apple cider, and lettuce. Other bacteria associated with contaminated meat and poultry, such as Salmonella, have also been found in foods that the public does not usually consider to be a potential source of illness, such as ice cream, tomatoes, melons, alfalfa sprouts, and orange juice. Fifth, some pathogens are far more resistant than expected to long-standing food-processing and storage techniques previously believed to provide some protection against the growth of bacteria. For example, some bacterial pathogens, such as Yersinia and Listeria, can continue to grow in food under refrigeration. 
Finally, according to CDC officials, virulent strains of well-known bacteria have continued to emerge. For example, one such pathogen, E. coli O104:H21, is another potentially deadly strain of E. coli. In 1994, CDC found this new strain in milk from a Montana dairy. While foodborne illnesses are often temporary, they can also result in more serious illnesses requiring hospitalization, long-term disability, and death. Although the overall cost of foodborne illnesses is not known, two recent estimates place some of the costs in the range of $5.6 billion to more than $22 billion per year. The first estimate, covering only the portion related to the medical costs and productivity losses of seven specific pathogens, places the costs in the range of $5.6 billion to $9.4 billion. The second, covering only the value of avoiding deaths from five specific pathogens, places the costs in the range of $6.6 billion to $22 billion. While foodborne illnesses are often brief and do not require medical treatment, they can also result in more serious illnesses and death. In a small percentage of cases, foodborne infections spread through the bloodstream to other organs, resulting in serious long-term disability or even death. Serious complications can also result when diarrhetic infections resulting from foodborne pathogens act as a triggering mechanism in susceptible individuals, causing an illness such as reactive arthritis to flare up. In other cases, no immediate symptoms may appear, but serious consequences may eventually develop. The likelihood of serious complications is unknown, but some experts estimate that about 2 to 3 percent of all cases of foodborne illness lead to serious consequences. For example: E. coli O157:H7 can cause kidney failure in young children and infants and is most commonly transmitted to humans through the consumption of undercooked ground beef. The largest reported outbreak in North America occurred in 1993 and affected over 700 people, including many children who ate undercooked hamburgers at a fast food restaurant chain. Fifty-five patients, including four children who died, developed a severe disease, Hemolytic Uremic Syndrome, which is characterized by kidney failure. Salmonella can lead to reactive arthritis, serious infections, and deaths. In recent years, outbreaks have been caused by the consumption of many different foods of animal origin, including beef, poultry, eggs, milk and dairy products, and pork. The largest outbreak, occurring in the Chicago area in 1985, involved over 16,000 laboratory-confirmed cases and an estimated 200,000 total cases. Some of these cases resulted in reactive arthritis. For example, one institution that treated 565 patients from this outbreak confirmed that 13 patients had developed reactive arthritis after consuming contaminated milk. In addition, 14 deaths may have been associated with this outbreak. Listeria can cause meningitis and stillbirths and has a fatality rate of 20 to 40 percent. All foods may contain these bacteria, particularly poultry and dairy products. Illnesses from this pathogen occur mostly in single cases rather than in outbreaks. The largest outbreak in North America occurred in 1985 in Los Angeles, largely in pregnant women and their fetuses. More than 140 cases of illness were reported, including at least 13 cases of meningitis. At least 48 deaths, including 20 stillbirths or miscarriages, were attributed to the outbreak. Soft cheese produced in a contaminated factory environment was confirmed as the source. 
Campylobacter may be the most common precipitating factor for Guillain-Barre syndrome, which is now one of the leading causes of paralysis from disease in the United States. Campylobacter infections occur in all age groups, with the greatest incidence in children under 1 year of age. The vast majority of cases occur individually, primarily from poultry, not during outbreaks. Researchers estimate that 4,250 cases of Guillain-Barre syndrome occur each year and that about 425 to 1,275 of these cases are preceded by Campylobacter infections. While the overall annual cost of foodborne illnesses is unknown, the studies we reviewed estimate that it is in the billions of dollars. The range of estimates among the studies is wide, however, principally because of uncertainty about the number of cases of foodborne illness and related deaths. (See app. IV.) Other differences stem from the differences in the analytical approach used to prepare the estimate. Some economists attempt to estimate the costs related to medical treatment and lost wages (the cost-of-illness method); others attempt to estimate the value of reducing the incidence of illness or loss of life (the willingness-to-pay method). Two recent estimates demonstrate these differences in analytical approach. In the first, USDA’s Economic Research Service (ERS) used the cost-of-illness approach to estimate that the 1993 medical costs and losses in productivity resulting from seven major foodborne pathogens ranged between $5.6 billion and $9.4 billion. Of these costs, $2.3 billion to $4.3 billion were the estimated medical costs for the treatment of acute and chronic illnesses, and $3.3 billion to $5.1 billion were the productivity losses from the long-term effects of foodborne illnesses. Medical expenses ranged from more modest expenses for routine doctors’ visits and laboratory tests to more substantial expenses for hospital rooms and kidney transplants. Productivity losses included expenses such as lost wages from long-term disabilities and deaths caused by foodborne illnesses. Table 2 provides information on the costs associated with each of the seven pathogens. CDC, FDA, and ERS economists stated that these estimates may be low for several reasons. First, the cost-of-illness approach generates low values for reducing health risks to children and the elderly because these groups have low earnings and hence low productivity losses. Second, this approach does not recognize the value that individuals may place on (and pay for) feeling healthy, avoiding pain, or using their free time. In addition, not all of the 30 pathogens associated with foodborne illnesses were included. In the second analysis, ERS used the willingness-to-pay method to estimate the value of preventing deaths for five of the seven major pathogens (included in the first analysis) at $6.6 billion to $22.0 billion in 1992. The estimate’s range reflected the range in the estimated number of deaths, 1,646 to 3,144, and the range in the estimated value of preventing a death, $4 million to $7 million. Although these estimated values were higher than those resulting from the first approach, they may have also understated the economic cost of foodborne illnesses because they did not include an estimate of the value of preventing nonfatal illnesses and included only five of the seven major pathogens included in the first analysis.
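To make the construction of these two ranges easier to follow, the arithmetic behind them can be written out using only the figures reported above; the layout and rounding shown here are ours, added for illustration, and are not drawn from the ERS studies themselves:

\[ \$2.3\ \text{billion} + \$3.3\ \text{billion} = \$5.6\ \text{billion} \qquad \$4.3\ \text{billion} + \$5.1\ \text{billion} = \$9.4\ \text{billion} \]
\[ 1{,}646\ \text{deaths} \times \$4\ \text{million} \approx \$6.6\ \text{billion} \qquad 3{,}144\ \text{deaths} \times \$7\ \text{million} \approx \$22.0\ \text{billion} \]

The first line reproduces the low and high ends of the cost-of-illness estimate (medical costs plus productivity losses); the second reproduces the endpoints of the willingness-to-pay estimate (the estimated number of deaths multiplied by the estimated value of preventing a death).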
While current data indicate that the risk of foodborne illnesses is significant, public health and food safety officials believe that these data do not identify the level of risk, the sources of contamination, and the populations most at risk in sufficient detail. More uniform and comprehensive data on the number and causes of foodborne illnesses could form the basis of more effective control strategies. Beginning in 1995, federal and state agencies took steps to collect such data in five areas across the country. While this effort will provide additional data, CDC officials believe that collecting data at more locations and for other pathogens would provide even more representative data and identify more causes of foodborne illnesses. According to public health and food safety officials, the current voluntary reporting system does not provide sufficient data on the prevalence and sources of foodborne illnesses. There are no specific national requirements for reporting on foodborne pathogens. According to CDC, states do not (1) report on all pathogens of concern, (2) usually identify whether food was the source of the illness, or (3) identify many of the outbreaks or individual cases of foodborne illness that occur. Consequently, according to CDC, FDA, and FSIS, public health officials cannot precisely determine the level of risk from known pathogens or be certain that they can detect the existence and spread of new pathogens in a timely manner. They also cannot identify all factors that put the public at risk or all types of food or situations in which microbial contamination is likely to occur. Finally, without better data, regulators cannot assess the effectiveness of their efforts to control the level of pathogens in food. According to public health and food safety officials, a better system for monitoring the extent of foodborne illnesses would actively seek out specific cases. Such a system would require outreach to physicians and clinical laboratories. CDC demonstrated the effectiveness of such an outreach effort when it conducted a long-term study, initiated in 1986, to determine the number of cases of illness caused by Listeria. This study showed that a lower rate of illness caused by Listeria occurred between 1989 and 1993 during the implementation of food safety programs designed to reduce the prevalence of Listeria in food. In July 1995, CDC, FDA, and FSIS began a comprehensive effort to track the major bacterial pathogens that cause foodborne illnesses. These agencies are collaborating with state health departments in five areas across the country to better determine the incidence of infection with Salmonella and E. coli O157:H7 and other foodborne bacteria and to identify these sources of diarrheal illness from Salmonella and E. coli O157:H7. Initially, FDA provided $378,000 and FSIS provided $500,000 through CDC to the five locations for 6 months. The agencies believe that this effort should be a permanent part of a sound public health system. For fiscal year 1996, FSIS is providing $1 million and FDA is providing $300,000. CDC provides overall management and coordination and facilitates the development of technical expertise at the sites through its established relationships with the state health departments. 
The project consists of three parts: a survey of the local population in the five locations and interviews with local health professionals to estimate the number of diarrheal illnesses and determine the number of illnesses for which medical attention was sought and laboratory samples were taken; a survey of laboratories to determine the microbiological testing procedures and processes used to identify foodborne illnesses and an audit of the participating laboratories’ test results to determine what proportion of cases were detected; and statistical studies to determine, among other things, the risks associated with different foods. CDC and the five sites will use the information to identify emerging foodborne pathogens and monitor the incidence of foodborne illness. FSIS will use the data to evaluate the effectiveness of new food safety programs and regulations to reduce foodborne pathogens in meat and poultry and assist in future program development. FDA will use the data to evaluate its efforts to reduce foodborne pathogens in seafood, dairy products, fruit, and vegetables. According to CDC, FDA, and FSIS officials, such projects must collect data over a number of years to identify national trends and evaluate the effectiveness of strategies to control pathogens in food. Funding was decreased slightly for this project in 1996, and these officials are concerned about the continuing availability of funding, in this era of budget constraints, to conduct this discretionary effort over the longer term. We provided copies of a draft of this report to CDC, FSIS, and FDA for their review and comment. We met with the Director, Division of Bacterial and Mycotic Diseases, CDC; the Associate Administrator, FSIS; and other relevant officials from both agencies. These officials generally agreed with the information discussed and provided some clarifying comments that we incorporated into the report. FDA’s Office of Legislative Affairs notified us that FDA generally agreed with the contents of the report and provided several technical comments that we incorporated. To conduct this review, we spoke with, and obtained studies, data, and other information on foodborne illnesses from, officials at CDC, ERS, FDA, and FSIS. We met with these officials at their headquarters in Atlanta, Georgia, and Washington, D.C. To examine the frequency of foodborne illness, we met with agency officials to identify and discuss the most widely recognized studies on the incidence of foodborne illness in the United States and obtained documentation. To examine the health consequences of foodborne illnesses, we relied primarily on discussions with medical experts at CDC and articles that have appeared in professional journals obtained from CDC officials and our literature review. To examine the economic impacts of foodborne illnesses, we reviewed the analytical approaches used to estimate the costs of foodborne illnesses and recent examples of such estimates and spoke with economists at CDC, ERS, and FDA. To examine the adequacy of knowledge about foodborne illnesses to develop effective control strategies, we spoke with the project managers from CDC, FDA, and FSIS and other agency officials associated with a joint effort with five state health departments recently undertaken to improve their knowledge about foodborne illnesses and collected agency documents. 
We reviewed but did not independently verify the accuracy of the data available on the number of reported cases of foodborne illness, the overall estimates of incidence, or the estimates of costs from specific pathogens because this effort would have required the verification of multiple databases and other information from state and federal agencies and other sources. This verification process would have required a large commitment of additional resources. We did not review data on the incidence of foodborne illness in other countries because comparable data were not readily available and the data that are available have some of the same limitations as the data on U.S. foodborne illnesses. We conducted our review from June 1995 through April 1996 in accordance with generally accepted government auditing standards. We are sending this report to you because of your role in overseeing the activities and funding of the agencies responsible for the issues discussed. If you or your staff have any questions about this report, I can be reached at (202) 512-5138. Major contributors to this report are listed in appendix V. To monitor, control, and prevent foodborne illnesses, the Centers for Disease Control and Prevention (CDC) relies primarily on four types of data from local and state health departments, according to CDC officials. These four types of data are shown in table I.1. (Outbreak data, for example, are reported annually for most outbreaks and more frequently for outbreaks of E. coli O157:H7 and Salmonella Enteritidis.) As table I.1 notes, each type of data has limitations, particularly the outbreak and laboratory data, which have been CDC’s primary monitoring tools. More specifically, as shown in figure I.1, in about half of the outbreaks the data do not identify the agent that caused the outbreak. Furthermore, these data generally do not provide information about the cause of a new trend. One or more factors can account for a new trend: a change in consumption behavior, such as a preference for turkey over red meat; a reporting bias, such as an increase in the number of laboratories testing for the disease; or a change in the nature of the disease, such as the emergence of a new strain. Finally, there is a delay from the time these data are reported to CDC until they are compiled into annual summaries. At the time of our review, complete annual summaries of data were only available through 1991. Furthermore, CDC’s laboratory data, from its Public Health Laboratory Information System, represent only a fraction of the cases of illnesses that occur from four pathogens that CDC tracks. For example, only one confirmed case of infection was cited in the laboratory data that the Georgia Health Department reported to CDC during an outbreak caused by contaminated ice cream products in 1994. However, on the basis of a survey of home delivery customers that it conducted, CDC estimated that 11,404 cases occurred in Georgia alone (products were distributed in 48 states). Finally, these data do not include information about the source of the illness. In addition to its program activities to monitor, control, and prevent foodborne illnesses, CDC collects national data on a range of pathogens and illnesses from a variety of data sources. These sources include the National Notifiable Diseases Surveillance System, the National Hospital Discharge Survey, the National Ambulatory Medical Care Survey, the National Health Interview Survey, and the National Vital Statistics System.
Researchers use these data to estimate the number of foodborne illnesses, their severity, and their costs. But these data have major limitations for understanding foodborne illnesses, primarily because they rarely identify the specific pathogen or indicate the method of transmission. For example, illnesses, such as those caused by E. coli O157:H7, cannot always be distinguished from other similar illnesses. Researchers may supplement national data with data from health maintenance organizations or community health studies. Such studies provide more detailed information about foodborne illnesses but are limited to small samples and have only been done occasionally. Although foodborne illnesses are often short term and do not require medical treatment, in some cases, these illnesses can involve other organs, resulting in serious complications. In other cases, foodborne illnesses may not result in immediate symptoms but ultimately may produce serious health problems. CDC has classified the causes of foodborne illnesses into the following four categories: Bacterial pathogens are microorganisms that can be seen with a microscope but not with the naked eye. Some bacterial pathogens are infectious themselves or can produce toxins. Furthermore, bacteria can multiply rapidly in food, making them difficult to control, and can be transmitted through person-to-person contact. Some bacteria, such as Clostridium botulinum, which causes botulism, can form spores in food that can resist some food preservation treatments, including boiling. Chemical agents are primarily naturally occurring toxins that can enter the food supply. Paralytic shellfish poisoning and mushroom poisoning are caused by such chemicals. Heavy metals—such as cadmium, copper, iron, tin, and zinc—are also included in this category. These agents can cause a variety of gastrointestinal, neurologic, respiratory, and other symptoms. Viral pathogens are too small to be seen with a conventional microscope. Only a few viral pathogens, such as the Hepatitis A and Norwalk viruses, have been proven to cause foodborne illnesses. Viral pathogens are often transmitted by infected food handlers or through contact with sewage. Parasitic pathogens are larger than bacterial pathogens and include protozoa (one-celled microorganisms) and multicelled parasites. They multiply only in host animals, not in food. Protozoa form cysts that are similar to spores but less resistant to heat. Cysts can be transmitted to new hosts through food that has been eaten. Multicelled parasites, such as Trichinella spiralis, which causes trichinosis, occur in microscopic forms, such as eggs and larvae. Thorough cooking will destroy larvae. While the likelihood of serious complications from foodborne illnesses is unknown, some researchers estimate that about 2 to 3 percent of all cases of foodborne illness lead to serious consequences. Although anyone can suffer from foodborne illnesses, certain populations are more at risk from them or their complications than others: pregnant women, children, those with compromised or suppressed immune systems, and the elderly. These groups are more at risk because of altered, underdeveloped, damaged, or weakened immune systems. Table II.1 provides information on several foodborne pathogens, the serious complications they may result in, and some of the foods in which they have been found. In 1990, the Public Health Service identified E.
coli O157:H7, Salmonella, Listeria monocytogenes, and Campylobacter jejuni as the four most important foodborne pathogens in the United States because of the severity and the estimated number of illnesses they cause. According to CDC officials, illnesses caused by E. coli O157:H7 and Listeria monocytogenes are generally more deadly than illnesses caused by other foodborne pathogens. In contrast, illnesses caused by Salmonella and Campylobacter jejuni are less likely to be deadly but are more common. This appendix discusses the estimated number of cases of foodborne illness caused by these pathogens. E. coli O157:H7 has emerged as an important cause of outbreaks of foodborne illness in the United States since 1982. (See fig. III.1.) Because few laboratories in the United States routinely test for E. coli O157:H7, the actual number of illnesses caused by this pathogen is unknown, but CDC officials estimate that this pathogen causes approximately 21,000 illnesses annually. As shown in figure III.2, only 33 states required reporting of such illnesses through the end of 1994, according to information provided by CDC. Figure III.3 provides estimates of the percentage of people who recover, remain ill, or die from E. coli O157:H7. On the basis of population-based studies, CDC officials estimate that between 800,000 and 4 million illnesses from the more than 2,000 strains of Salmonella occur each year in the United States. In 1994, one strain, Salmonella Enteritidis, accounted for more than 25 percent of all reported infections from Salmonella. Confirmed laboratory reports of the Salmonella Enteritidis strain increased from 3,322 to 10,009 between 1982 and 1994. While the number of outbreaks from Salmonella Enteritidis has declined since 1989, the 44 outbreaks reported in 1994 sickened over 5,000 people, more than in any other year. Figure III.4 shows the estimated percentage of people who recover or die from all strains of Salmonella. CDC estimates that the number of illnesses and deaths caused by Listeria monocytogenes declined between 1989 and 1993, from 1,965 cases and 481 deaths to 1,092 cases and 248 deaths. CDC attributes this downward trend to prevention efforts implemented by the food industry and regulatory agencies. Figure III.5 shows the estimated percentages of people who recover, remain ill, or die from Listeria monocytogenes. According to CDC, Campylobacter jejuni is the most common bacterial cause of diarrhea in the industrialized world. An estimated 2 million to 4 million cases occur each year in the United States, according to population-based studies. Although the number of Campylobacter jejuni cases confirmed by laboratory reports represents only a small proportion of the total number of illnesses that are estimated to occur from Campylobacter jejuni, the reported number more than doubled from 3,947 in 1982 to 7,970 in 1989. Most cases of illness occur sporadically and not as part of an outbreak. Illness can occur from contact with raw foods (often poultry) during food preparation. Figure III.6 shows the estimated percentage of people who recover or die from Campylobacter jejuni. This appendix provides information on the cost of foodborne illnesses using both the cost-of-illness and the willingness-to-pay methods. The range of estimates is wide, however, principally because of uncertainty over the number of cases of foodborne illness and deaths. Table IV.1 provides the estimated number of illnesses and deaths in 1993 used to calculate the cost-of-illness estimate. 
As the table indicates, food was the most frequent source of contamination for five of the seven pathogens the U.S. Department of Agriculture's (USDA) Economic Research Service (ERS) examined. CDC has targeted four of these seven pathogens as the most threatening foodborne pathogens. Table IV.2 presents cost-of-illness estimates for all foodborne illnesses and illnesses from meat and poultry. Contaminated meat and poultry are believed to be among the most common sources of foodborne illness from these pathogens. ERS also used the willingness-to-pay method to estimate the value of preventing deaths for five of the seven major pathogens. The results of this analysis are shown in table IV.3.
Major contributors to this report: Edward M. Zadjura, Assistant Director; Jay Cherlow, Assistant Director for Economic Analysis; Daniel F. Alspaugh, Project Leader; Carol Herrnstadt Shulman; and Jonathan M. Silverman.
GAO reviewed the extent of foodborne illnesses caused by microbial contamination, focusing on: (1) the frequency, health consequences, and economic impacts of these illnesses; and (2) the extent of information available to develop effective control strategies. GAO found that: (1) between 6.5 million and 81 million cases of foodborne illness and as many as 9,100 related deaths occur each year; (2) the risk of foodborne illness is increasing due to changes in food supply and consumption, recognition of new causes of foodborne illnesses, new modes of transmission, increased resistance to long-standing food-processing and storage techniques, and emerging virulent strains of well-known bacteria; (3) while foodborne illnesses are most often brief and do not require medical care, a small percentage cause long-term disability or even death; (4) foodborne illness may cost billions of dollars every year in medical costs and lost productivity; (5) the current voluntary reporting system does not provide sufficient data on the prevalence and sources of foodborne illnesses; (6) efforts are under way to collect more and better data on the prevalence and sources of foodborne illnesses; and (7) more uniform and comprehensive data on the number and causes of foodborne illnesses could lead to more effective control strategies.
U.S. passports are official documents that are used to demonstrate the bearer's identity and citizenship for international travel and reentry into the United States. Under U.S. law, the Secretary of State has the authority to issue passports, which may be valid for up to 10 years. Only U.S. nationals may obtain a U.S. passport, and evidence of citizenship or nationality is required with every passport application. Federal regulations list disqualifying situations under which U.S. citizens are not eligible for a passport, such as being the subject of a federal felony arrest warrant. The security of passports and the ability to prevent and detect their fraudulent use are dependent upon a combination of well-designed security features, solid issuance procedures for the acceptance and adjudication of the application and the production of the document, and inspection procedures that utilize the available security features of the document. A well-designed document has limited utility if it is not well produced or if inspectors do not utilize the security features to verify the authenticity of the document. In 2005, State began issuing e-passports, which introduced an enhanced design and physical security features. GPO manufactures blank e-passport booklets for State using a variety of materials from different suppliers. Currently, GPO has two suppliers—Infineon and Gemalto—under contract for the covers of the e-passports. These covers include the computer chip embedded in the back cover that can communicate using contactless ID technology. Security-minded versions of this technology are employed in contactless smart cards used in applications such as automatic banking and identification. As of February 1, 2009, the State Department had issued over 30 million e-passports. To combat document fraud, security features are used in a wide variety of documents, including currency, identification documents, and bank checks. Security features are used to prevent or deter fraudulent alteration or counterfeiting of such documents. In some cases, an altered or counterfeit document can be detected because it does not have the look and feel of a genuine document. For instance, in U.S. passports, detailed designs and figures are used with specific fonts and colors. While these features are not specifically designed to prevent the use of altered or counterfeit documents, inspectors can often use them to identify nongenuine documents. Security features of travel documents are assessed by their capacity to secure a travel document against the following threats: counterfeiting—the unauthorized construction or reproduction of a travel document; forgery—the fraudulent alteration of a travel document; and impostors—the use of a legitimate travel document by people falsely representing themselves as legitimate document holders. While security features can be assessed by their individual ability to help prevent the fraudulent use of the document, it is more useful to consider the entire document design and how all of the security features help to accomplish this task. Layered security features tend to provide improved security by minimizing the risk that the compromise of any individual feature of the document will allow for unfettered fraudulent use of the document. While most security features in the U.S. e-passport are physical features, the introduction of the computer chip also allows for the use of electronic security features. 
In general, at ports of entry, travelers seeking admission to the United States must present themselves and a valid travel document, such as a passport, for inspection to a CBP officer. The immigration-related portion of the inspections process requires the officer to confirm the identity and determine the admissibility of the traveler by questioning the individual and inspecting the presented travel documents. In the first part of the inspection process—primary inspection—CBP officers inspect travelers and their travel documents to determine whether they should be admitted or referred for further questioning and document examination. If additional review is necessary, the traveler is referred to secondary inspection—in an area away from the primary inspection area—where another officer makes a final determination to admit the traveler or deny admission for reasons such as the presentation of a fraudulent or counterfeit passport. The chips used in the U.S. e-passports are integrated circuits (ICs) that are essentially complete computers that contain a central processing unit, various types of memory, and other components that perform specialized functions such as random number generation and advanced cryptographic processing. The chips contain both hardware and software. The hardware circuitry and the operating system are implanted into the various layers of the chip in a process called photolithography, which employs a technique called masking wherein the chip's circuitry is defined on a series of glass plates called the photomask. The photomask is used as a template to transfer the pattern of the chip's electronic components into the various layers of the physical chip. Once implanted, the circuitry is considered permanent and not changeable except through physical attack. While the chip's operating system is implanted into the chip through the photomask at chip creation time, other software and data needed on the chip—for example, the traveler data—are written to the chip later, during personalization of the chip. The e-passports are designed as contactless proximity cards, and communication with the embedded chip is only via a radio frequency (RF) link established according to standard methods with a device generally called a reader. To support global acceptance and interoperability of e-passports, ICAO issued standards that define how data are to be stored on and read from e-passports, including the RF communications. According to the ICAO standards, contactless communication with the e-passport is governed by ISO/IEC 14443, an international standard that defines the transmission protocol used to transfer data between the reader and the chip. Higher-level reading from and writing to the chip is implemented through the ISO/IEC 7816-4 command set. ISO 7816-4 is an international standard set of commands used to communicate with the chip and to control all reading from and writing to the chip based on a strict command/response scheme. The reader initiates all commands to the chip and the chip provides the expected response. The chip itself cannot initiate any communications with the reader. ISO 7816-4 includes controls to limit read and write access to the chip to authorized parties. The United States issues e-passports with both ISO/IEC type A and type B interface connections. Both types use the same transmission protocol, but vary in how communications are established between the chip and the reader and in how information is encoded for transmission. 
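To make the command/response pattern concrete, the sketch below shows, in Python, how a reader-side program might frame ISO 7816-4 commands and check the chip's status words. It is illustrative only: the application identifier shown is the commonly published ICAO value for the e-passport data application, the `transmit` callable stands in for whatever contactless reader library is in use, and nothing here reproduces the access controls or personalization commands actually used for U.S. e-passports.

```python
# Illustrative sketch of the ISO 7816-4 command/response pattern described above.
# The reader always initiates; the chip only answers. The application identifier
# and the transmit() callable are assumptions for illustration.

from typing import List

# Commonly published ICAO application identifier for the e-passport data application.
EPASSPORT_AID = [0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01]

def build_select_apdu(aid: List[int]) -> List[int]:
    """Build a SELECT-by-AID command APDU: CLA INS P1 P2 Lc <data>."""
    return [0x00, 0xA4, 0x04, 0x0C, len(aid)] + aid

def check_response(data: List[int], sw1: int, sw2: int) -> List[int]:
    """The chip answers every command with data plus a two-byte status word."""
    if (sw1, sw2) != (0x90, 0x00):          # 0x9000 signals success
        raise RuntimeError(f"Chip rejected command: SW={sw1:02X}{sw2:02X}")
    return data

def read_binary(transmit, offset: int, length: int) -> List[int]:
    """READ BINARY: the reader asks for `length` bytes at `offset`; the chip replies."""
    apdu = [0x00, 0xB0, (offset >> 8) & 0x7F, offset & 0xFF, length]
    return check_response(*transmit(apdu))

# Usage with a hypothetical transmit() supplied by a contactless reader library,
# returning (data, sw1, sw2):
#   data, sw1, sw2 = transmit(build_select_apdu(EPASSPORT_AID))
#   check_response(data, sw1, sw2)
#   header = read_binary(transmit, 0x0000, 4)
```

Because the chip can only answer these reader-initiated commands, the exchange is bounded by whatever the reader chooses to ask for, a point that becomes important later in the discussion of malicious code.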
The chip has no onboard power, but instead pulls the energy it needs from the electromagnetic field emitted by the reader. The e-passport antenna receives the electromagnetic energy from the reader and converts it to electric current to power the chip. The chip can be powered and communicate only when it is in close proximity—up to about 10 centimeters—to an appropriate reader. With both types of chips, the antenna is a component external to the chip and separately attached to it as part of the overall book cover manufacturing process. While the communication protocols and command set are standardized, the operating system and other software used on the chips are vendor-specific. As is typical with smart card ICs, the software on the e-passport chips is partitioned into three general areas: the IC dedicated software, the basic embedded software, and the application embedded software. The IC dedicated software contains software used for testing purposes and software to provide other services to facilitate usage of the hardware on the IC. The IC dedicated software is developed by the IC manufacturer and is part of the photomasks of the chips. The basic embedded software is typically not provided by the chip manufacturer, but is usually developed by a third party and delivered to the chip manufacturer for incorporation into the chip's photomask. An important component of the basic embedded software is the operating system for the chip. The operating system implements the ISO 7816-4 command set and controls all communication between the chip and the outside world. The third major partition of software on the chip is the application embedded software, which is also typically provided by a third party and provides functionality specific to the particular application for which the chip is intended to be used. In the case of the U.S. e-passports, the application software consists of data contained in a file layout that uses an open, ICAO-specified logical data structure for machine-readable travel documents. In producing e-passport booklets for State, GPO has tapped into the existing global smart card industry, resulting in a number of different companies being involved in the e-passport chip production and inlay process. Two separate companies were awarded contracts to supply chips for the U.S. e-passports. Infineon, a German company, fabricates its own chips and embeds a commercial operating system from a third-party company on them. Gemalto, a Dutch company, obtains chips from NXP, a Dutch semiconductor manufacturer. Gemalto provides NXP with its own operating system, which NXP embeds within the chip prior to shipping the chip to Gemalto. Although each of these contractors takes a different path to create and provide e-passport covers to GPO, both use a common subcontractor for attachment of the antenna to the chip and the inlaying of the chip into the back cover of the e-passport booklet. GPO itself finishes production of the e-passport booklet by inserting the paper pages into the covers, installing a metal strip down the inside spine for RF shielding, and, in a process termed pre-personalization, preparing the chip for use by the State Department. State personalizes the e-passport by printing bearer data onto the data page and writing digital data onto the chip as part of its issuance procedures. As seen in figure 1, several steps are involved in the production of an e-passport using Gemalto's e-passport booklet. 
Gemalto involves several subcontractors to produce the cover before it is delivered to GPO. For instance, while the operating system software is created by Gemalto, it is implanted on the chip when it is fabricated by NXP. Companies overseas are also involved in the production of the chip and its incorporation into the e-passport cover. In pre-personalization, GPO tests and formats the chips, preparing them for personalization by State, and finishes overall construction of the e-passport booklet. GPO then ships the finished, blank e-passport books to the 21 State Department passport issuing offices around the country, which then personalize and issue them to U.S. citizens as needed. Similar to Gemalto's production process, the production process at Infineon also involves several subcontractors to produce the booklet cover before it is delivered to GPO (see fig. 2). The operating system and other embedded software used on the Infineon chips are developed by a third-party company and shipped to Infineon for incorporation into the photomask pattern. As with the Gemalto production process, GPO tests and pre-personalizes each chip, finishes the books, and distributes the finished, blank e-passport books to the 21 passport-issuing offices. Since 1997, GAO has identified federal information security as a high-risk area. Malicious code is one of the primary threats to federal information security. NIST defines malicious code—sometimes called malware—as "a program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality, integrity, or availability of the victim's data, applications, or operating system or of otherwise annoying or disrupting the victim." Malicious code can be used for many purposes and can come in many forms. For example, malicious code might be designed to delete files on a system or repeatedly attempt access to a system service and thus effectively shut it down. The effects of malicious code can range from performance degradation to compromise of mission-critical applications. Some common forms of malicious code include viruses, worms, and Trojan horses. Viruses infect a system by attaching themselves to host programs or data files. Worms are self-contained programs that can self-replicate and do not require human interaction to spread through a system or network. Trojan horses are nonreplicating programs that appear benign but are designed with a malicious purpose. Malicious code often takes advantage of vulnerabilities in a system's software to either spread or execute. For example, a common vulnerability known as a buffer overflow can be exploited, through badly designed software, to redirect system control to a malicious program. Inadequate controls on a network's connections or services are another common vulnerability that allows malicious code to spread. Common protections against malicious code include input checking at the boundaries of a system, such as at external interfaces to a system; network controls to lower the possibility that malicious code could spread within a system; and patch management to address vulnerabilities in the system's software that malicious code can exploit. In general, a successful malicious code attack first requires that the malicious code get into a system. This can occur, for example, by inserting infected media into the computer or through incomplete controls on the system's network connections. Second, the malicious code needs to spread to those areas of a system to which it wants to cause damage. 
Malicious code can spread in many ways, including various network protocols and services and also in simple file transfers. Finally, malicious code needs to be executed, often by taking advantage of vulnerabilities in a system's software. Therefore, in the case of e-passports, a successful malicious code attack from the chip would first require that malicious code get on the chip. Second, the code would have to be transferred from the chip onto agency computers during the e-passport inspection process and then spread to vulnerable areas within those systems. Finally, the malicious code would have to be executed. Although communication with the chips is designed to occur via the contactless ID interface that complies with the ISO 7816-4 standard, which includes an authentication procedure to limit read and write access to the chip to authorized parties, an attacker can also attempt to read data from or write data to the chip through physical tampering techniques. In general, the aim of such an attack is to discover confidential data stored on the chip—such as cryptographic keys—which can be used to open access to the chip via the contactless interface. Common Criteria is an international standard method for evaluating security features of information technology (IT) components. The U.S. portion of this effort is coordinated through a partnership of NIST and the National Security Agency (NSA) called the National Information Assurance Partnership (NIAP). The Common Criteria program evaluates commercial-off-the-shelf information assurance and information assurance-enabled products. These products can be items of hardware, software, or firmware. Evaluations are performed by accredited Common Criteria testing laboratories whose results are then certified by a validation body. A product is considered Common Criteria certified only after it is both evaluated by an accredited laboratory and validated by the validation body. Common Criteria certifications are expressed in a seven-step assurance scale called Evaluation Assurance Levels. The seven ordered levels provide an increasing measure of confidence in a product's security functions. All evaluated products that receive a Common Criteria certificate appear on a validated products list, which is available on the Common Criteria Web site. To facilitate the efficient use of testing resources, an international agreement was developed under which one country's Common Criteria certifications would be recognized by the other participating countries. This is intended to eliminate unnecessary duplication of testing efforts. Common Criteria certifications need to be carefully considered. We have reported previously that the fact that a product appears on the validated products list does not by itself mean that it is secure. A product's listing on any Common Criteria validated products list means that the product was evaluated against its security claims and that it has met those claims. The extent to which vendor-certified claims provide sufficient security for a given application is another question. A complex environment has been established to provide reasonable assurance that the data contained on electronic passports can be used to help determine whether an individual should be admitted to the United States. 
The overall control environment depends on each party effectively implementing the controls that have been established to govern its operation and utilize the controls implemented by the other agencies. State uses a technology commonly referred to as public key cryptography to generate digital signatures on the data it writes to the computer chips on the e-passport. These digital signatures, when effectively implemented, can help provide reasonable assurance that integrity has been maintained over the data placed on the chip by State. Our review found that DHS has not implemented the capabilities needed to completely validate the digital signatures generated by State before relying on the data, which adversely affects its ability to obtain reasonable assurance that the electronic data provided in a chip were the same data that State wrote in the e-passport. While DHS has some controls that somewhat mitigate this weakness, it does little to ensure that altered or forged electronic data can be detected. Accordingly, until DHS implements this functionality, it will continue to lack reasonable assurance that data found on e-passport computer chips have not been fraudulently altered or counterfeited. ICAO has issued e-passport standards that have been adopted by the United States and other countries. As part of its specifications for e-passports, ICAO requires the use of digital signatures and a public key infrastructure to establish that the data contents of the computer chip are authentic and have not been changed since being written. A PKI—a system of hardware, software, policies, and people—is based on a sophisticated cryptographic technique known as public key cryptography. The use of a PKI for e-passports primarily serves to provide (1) data integrity (the electronic data placed on the passport have not been changed), and (2) authentication (the country issuing the e-passport was the source of the data). In its standards, ICAO specifies only the use of well-known cryptographic algorithms for use in e-passports. As discussed in appendix II, public key cryptography is used to generate and validate digital signatures. In particular, the “public key” is used to validate the digital signature that is used to authenticate the data being signed. However, a means is necessary for the user to reliably associate a particular public key with a document signer. The binding of a public key to a document signer is achieved using a digital certificate, which is an electronic credential that guarantees the association between a public key and a specific entity. In agreement with ICAO standards for e-passports, State generates and writes a digital signature on the chip of each e-passport during the personalization process. As illustrated in figure 3, State stores the following information on the e-passport computer chip: biographical information about the traveler, the traveler’s facial image, and security data. The biographical data and facial image are organized into data groups for storage on the e-passport. Each data group is condensed using a hashing algorithm and the resulting hash values are stored in the security data. A digital signature is generated on these hash values, which represent the data stored on the e-passport computer chip. Hence, the security data on an e-passport consist of three key elements: the data group hash values, the digital signature, and the certificate needed to validate the digital signature. 
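The relationship among the data groups, the hash values, and the digital signature can be illustrated with a minimal sketch. The example below uses SHA-256 and an RSA key from the Python cryptography package purely as stand-ins; the actual algorithms, key management, and ASN.1 encoding of the security data used by State are not reproduced here.

```python
# Minimal sketch of the structure described above: each data group is hashed, the
# hash values are collected into the security data, and a digital signature is
# generated over them. SHA-256 and an RSA key are stand-ins chosen for illustration.

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative data groups: DG1 holds machine-readable-zone data, DG2 the facial image.
data_groups = {1: b"<MRZ data>", 2: b"<JPEG facial image>"}

# 1. Hash each data group and collect the hash values (the heart of the security data).
dg_hashes = {n: hashlib.sha256(content).digest() for n, content in data_groups.items()}
hashes_blob = b"".join(dg_hashes[n] for n in sorted(dg_hashes))

# 2. The issuer signs the collected hash values with its private document signer key.
document_signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = document_signer_key.sign(hashes_blob, padding.PKCS1v15(), hashes.SHA256())

# 3. A verifier recomputes the hashes from the data groups it read from the chip and
#    checks the signature with the issuer's public key; verify() raises on any mismatch.
public_key = document_signer_key.public_key()
public_key.verify(signature, hashes_blob, padding.PKCS1v15(), hashes.SHA256())
```

In practice, the verifier does not receive the public key bare, as in the last step above; it receives it inside a certificate stored with the signature in the security data.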
This certificate—known as the document signer certificate—is associated with a digital signature on a U.S. e-passport’s data and is used to validate that the signed data contained in that passport were actually generated by State. The keys and certificates associated with U.S. e-passports are established in a hierarchical manner to establish a “chain of trust” that a third party, such as DHS, can use to obtain reasonable assurance that the data contained in the passport are the data that were actually written on to the e-passport by State. State has developed a comprehensive set of controls to govern the operation and management of the PKI that generates the digital signatures used to help assure the integrity of the passport data written to the chip. These controls include the development of policies and practices that are consistent with best practices described in federal guidelines. For example, State’s policies and procedures for generating and storing digital signatures and certificates from cryptographic modules minimize the risk of compromise or unauthorized disclosure. Further, State’s procedures require the use of cryptographic modules validated against the level 3 criteria of FIPS 140-2, which is consistent with federal best practices and requirements. If properly validated, the digital signatures on State’s e-passports should provide those reading the chip data, including DHS, reasonable assurance that the data stored on the chip were written by State and have not been altered. Proper validation includes verifying that the document signer certificate was issued by the State Department. In July 2007, we reported that DHS was not fully using a key security feature of the U.S. e-passport—namely the data stored on the chip. At that time, DHS had not fully deployed e-passport readers to all primary inspection lanes at all ports of entry and did not have a schedule to do so. We also reported that the implemented e-passport reader solution was not capable of validating e-passport digital signatures, which would help to ensure that the data written to the e-passport chips have not been altered. Since that time, while DHS has begun planning an acquisition for new e-passport readers, DHS has made no further deployments of e-passport readers, nor has it implemented a solution that would allow for the full verification of the digital signatures on e-passport computer chips. In 2006, as a part of the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) system, DHS deployed 237 e-passport readers at 33 air ports of entry—212 are installed in primary inspection lanes and 25 are installed in training areas. No e-passport readers are deployed in secondary inspection areas. While these 33 air ports of entry were chosen because they process the largest volume of travelers—about 97 percent—from Visa Waiver Program countries, the majority of lanes at these airports do not have e-passport readers. Even though the same e-passport readers may be used to read U.S. e-passports, U.S. citizens are primarily processed through lanes at these air ports of entry that are not equipped with e-passport readers. At equipped primary inspection lanes, CBP officers can use e-passport readers to access the biographical information and digitized photograph stored on the e-passport chip. To read e-passports, officers place the biographical page of the e-passport on the reader’s glass plate. 
The reader then electronically scans the biographical information printed on the page and uses it to access the information stored in the e-passport's chip. Once the biographical data and photograph from the chip are displayed on the primary inspection computer screen, the officer is to compare the information displayed with the information on the biographical page of the passport and verify that they match. The results of any validation activities conducted on the data by the system are also presented to the officer. Any mismatches could indicate fraud. While a total of 500 e-passport readers were purchased by the US-VISIT program, DHS has made no further deployments of e-passport readers since 2006. Those not deployed are in storage, used for training, or used to support system development activities. Following the deployment at the 33 air ports of entry in 2006, responsibility for deploying the e-passport readers was shifted from the US-VISIT program to CBP. CBP officials partially attributed the lack of progress in deploying e-passport readers to the agency's failure to allocate funding for the activity since it assumed the responsibility from US-VISIT. According to DHS officials, the slower than expected times to read data from e-passport chips also influenced its decision not to proceed with further deployment of the e-passport readers. In 2008, DHS transferred $11.4 million of no-year funds from US-VISIT to CBP for planning, purchasing, and deploying e-passport readers at all CBP primary processing lanes and secondary inspection areas at the ports of entry. According to CBP officials, CBP is currently planning an acquisition for new e-passport readers. As a part of the acquisition planning, CBP also expects to determine whether it will replace the 500 currently deployed or stored e-passport readers with new readers that will likely have better performance than the current readers. According to DHS, CBP is planning an e-passport reader procurement that will allow for the full deployment of e-passport readers in fiscal year 2011. In our prior work, we recommended that DHS develop a deployment schedule for providing sufficient e-passport readers to U.S. ports of entry. With the identification of funding for the effort, CBP has initiated planning for further deployment of e-passport readers, but has not yet developed a deployment schedule. Until DHS installs e-passport readers in all inspection lanes, CBP officers will not be able to take advantage of the data stored on e-passport chips. For instance, without e-passport readers, CBP officers are unable to read the photograph and biographic information stored on the e-passport chip, information that would better enable officers to detect many forms of passport fraud, including impostors and the alteration or substitution of the photos and information printed in the passports, and help to determine the traveler's identity and admissibility into the United States. While DHS's systems conduct some validation activities to ensure the integrity of the data on the e-passport chip, it does not have adequate assurance that the data stored on the chip have not been changed since they were authored by a legitimate issuing authority—in the case of U.S. e-passports, the State Department. In primary inspection lanes that are equipped with e-passport readers, CBP's workstations conduct a series of checks using data read from the e-passport computer chip, including the biographical data, the facial image, and the security data. 
First, the CBP workstation verifies that the biographical data read from the computer chip match the data read from the printed biographical page. Second, the CBP workstation calculates the hash values of the data groups read from the computer chip and compares them with the hash values stored in the security data. If available, the CBP workstation will also use the digital certificate to verify the digital signature. The expiration date of the e-passport and the digital certificate are also checked. Finally, if the e-passport has been previously read by CBP, the hash value of the facial image is compared with the value stored by CBP. If this is the first time the e-passport has been encountered, the hash value is stored for future comparisons. Any mismatches are to result in an error being displayed to the CBP officer. Further, in October 2008, DHS began to make U.S. passport data available to CBP officers in primary inspection. DHS is now receiving U.S.-issued passport data through a datashare initiative with the Department of State. CBP has modified its workstations to retrieve this additional information when U.S. passports, including e-passports, are processed. When CBP officers enter U.S. passport data into appropriately configured CBP workstations, the photograph of the traveler, as issued by the State Department, will be displayed to the officer. As e-passports are issued by State, the corresponding information is made available to DHS through the datashare. State worked with DHS to transfer data on all valid historical U.S. passports. As more historical U.S. passport information becomes available, more photographs will be displayed to primary officers upon processing a U.S. citizen through primary inspection. However, a key step is missing: the CBP workstation does not validate the legitimacy of the public key used to verify the digital signature. Such a validation would provide assurance that the public key in the document signer certificate was generated by the State Department. Without this verification, CBP does not have reasonable assurance that the e-passport data being protected by the digital signature were written by the State Department because forgers or counterfeiters could simply generate the keys necessary to digitally sign the forged data and include their own certificate in the e-passport for verification purposes. Checking the legitimacy of the certificate containing the public key that is used in the digital signature validation process would effectively mitigate this risk. When generated, the document signer certificates are themselves digitally signed. However, CBP does not have access to the public keys necessary to validate these digital signatures. While DHS tested the functionality of storing and using this information to verify the certificates included by State and other nations on e-passports using the CBP workstation, the functionality was not implemented for operations because the infrastructure to collect and maintain the international certificate database did not exist. According to DHS officials, this function was a US-VISIT requirement, but was not implemented, in part, because a DHS component that would be responsible for operating the public key database was never identified. DHS officials also stated that the slow performance of reading e-passports diminished the importance of implementing this function. 
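A hedged sketch of this sequence of checks, and of the certificate validation step that is not performed, is shown below. The object attributes and helper names are illustrative stand-ins, not CBP system interfaces.

```python
# Hedged sketch of the sequence of checks described above, including the step that is
# not performed operationally (validating the document signer certificate against the
# issuing authority's certificate). All names and data structures are illustrative.

import hashlib
from datetime import date

def inspect_epassport(chip, printed_page, stored_image_hashes, csca_store=None):
    errors = []

    # 1. Biographical data from the chip must match the printed data page.
    if chip.biographic_data != printed_page.biographic_data:
        errors.append("chip/page biographic mismatch")

    # 2. Recompute each data group hash and compare with the hashes in the security data.
    for dg_number, content in chip.data_groups.items():
        if hashlib.sha256(content).digest() != chip.security_data.dg_hashes[dg_number]:
            errors.append(f"hash mismatch for data group {dg_number}")

    # 3. Verify the digital signature over the stored hash values, if a certificate is present.
    if not chip.security_data.signature_verifies():
        errors.append("digital signature invalid")

    # 4. Expiration checks on the passport and the certificate.
    if chip.expiration_date < date.today() or chip.security_data.cert_expiration < date.today():
        errors.append("document or certificate expired")

    # 5. On a repeat encounter, compare the facial-image hash with the stored value.
    previous = stored_image_hashes.get(chip.document_number)
    image_hash = hashlib.sha256(chip.data_groups[2]).digest()
    if previous is not None and previous != image_hash:
        errors.append("facial image changed since last encounter")
    stored_image_hashes[chip.document_number] = image_hash

    # Missing step: confirm that the document signer certificate chains to the issuing
    # country's certificate. Without it, a forger could sign altered data with keys of
    # its own making and still pass checks 2 through 4.
    if csca_store is not None and not csca_store.chains_to_issuer(chip.security_data.certificate):
        errors.append("document signer certificate not issued by a recognized authority")

    return errors
```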
Not being able to check the legitimacy of the document signer certificates affects not only CBP's ability to verify the integrity and authenticity of the data written to U.S. e-passport computer chips, but also its ability to verify the integrity and authenticity of computer chip data on any country's e-passport. The United States requires all 35 participants in the Visa Waiver Program to issue e-passports, and ICAO has estimated that over 50 countries issue e-passports. Because CBP does not have the necessary information to fully validate the digital signatures that these countries generate, it does not have reasonable assurance that data signed by those countries were actually generated by the authorized passport issuance agency for that country. Hence, it cannot ensure that the integrity of the data stored on the e-passport's computer chip has been maintained. Two key issues need to be resolved for CBP to be able to rely on data stored on e-passport computer chips. First, a database that can be accessed by CBP inspection workstations at the ports of entry needs to be established and populated with the digital certificates needed to fully validate the digital signatures. An approach needs to be developed and implemented to populate the database with the needed information, including State Department data for U.S. e-passports, that can be used to fully validate the digital signatures. According to ICAO, this information should be distributed only through secure diplomatic channels. Second, CBP needs to develop and implement functionality on its inspection workstations to access the database when e-passport data are read to verify that the legitimate passport-issuing authority signed the data being relied upon. Until these two key issues are addressed, CBP will continue to lack reasonable assurance that data found on e-passport computer chips have the necessary integrity; hence, the security enhancements that could be provided by e-passport computer chip data against counterfeiting and forgery are not completely realized. Protections designed into the U.S. e-passport computer chip limit the risks of malicious code being resident on the chip, a necessary precondition for a malicious code attack to occur from the chip against computer systems that read it. GPO and State have taken additional actions to decrease the likelihood that malicious code could be introduced onto the chip. While these steps do not provide complete assurance that the chips are free from malicious code, the limited communications between the e-passport chip and agency computers significantly lowers the risk that malicious code—if resident on an e-passport chip—could pose to agency computers. As we previously discussed, the e-passport's digital signature can provide reasonable identification of unauthorized modification of the user data areas—including modifications resulting from the introduction of malicious code. Finally, given that no protection can be considered foolproof, DHS still needs to address deficiencies noted in our previous work on the US-VISIT computer systems to mitigate the impact of malicious code, should it infect those systems. Security features designed into the e-passport computer chips, including the digital signature, provide protections against the introduction of malicious code onto the chip during the e-passport booklet production process. 
For example, among other features, the chips include physical tamper protections that aid in sensing or thwarting physical attacks, a cryptographic authentication procedure to lock the contactless interface against unauthorized access, and incorporation of a digital signature that can be used to identify any unauthorized modification of the user data areas. As of 2007, NIST had not been able to identify any known cases of a malicious code attack against a computer network from a contactless chip. Nevertheless, both NIST and DHS agree that it is possible and have generally identified physical tamper attacks as threats to embedded electronic chips in contactless applications such as e-passports. Physical tamper attacks involve stripping away the chip’s outer coverings, exposing the electronic circuitry on the wafer, and analyzing or monitoring chip activity by inserting electronic probes onto components etched into the wafer. In general, the aim of such an attack is to discover confidential data stored on the chip—such as cryptographic keys—which can be used to open access to the chip via the contactless interface. In terms of a malicious code threat, the purpose then would be to write malicious code onto the chip via the RF interface. In its guide to chip-level security for contactless ICs, DHS identifies common methods used in physical tamper attacks on contactless ICs. For example, after removing top layers of plastic or other coverings and uncovering the electrical surfaces of the chip, attackers could probe into the various chip layers in an attempt to understand its processing. Common methods of physical attack are those related to (1) fault introduction, (2) IC monitoring, and (3) reverse engineering. The purpose of each of these attacks is ultimately to uncover secret information—such as cryptographic keys or passwords—that would allow an attacker to open the chip for read/write access via the contactless interface. In fault introduction, attackers attempt to introduce faults randomly, at specific times during the processing, or in specific locations on the IC circuitry, to gain additional information about the chip processing during such faults, which could provide clues to the memory location of secret keys. Similarly, such clues can be uncovered using IC monitoring, where readers or probes placed on the chip’s internal circuitry are used to monitor calculations or flows of data on the chip. Finally, attackers could attempt to reverse engineer the computer chip to decipher its hardware architecture and read the secret information. In its guide, DHS identifies countermeasures for each of these types of attack. For example, protections against fault introduction include implementing sensors that detect when parameters, such as light or temperature, vary outside of expected values. If such variations are sensed, the chip may automatically reset or even disable itself. Protections against IC monitoring might include encrypting the traffic flowing along the internal circuitry so that interpretation would be difficult. Protections against physical analysis include encrypting information stored in memory and scrambling the design of the logic contained in the operating system when laid down in memory during IC creation. Well-designed security microcontrollers, with numerous security features and support for mutual authentication and sophisticated cryptographic functions, can be designed to make it extremely difficult, costly, and time-consuming for attackers to compromise. 
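The defensive behavior of these sensor-based countermeasures can be pictured with a small sketch. The parameter ranges and the reset-or-disable response below are illustrative assumptions, not the firmware logic of either vendor's chip.

```python
# Purely illustrative model of the sensor-based countermeasure described above: if an
# operating parameter strays outside its expected range, the chip resets or disables
# itself rather than continue processing under a possible fault-introduction attack.

EXPECTED_RANGES = {                 # example values, not vendor specifications
    "temperature_c": (-25.0, 85.0),
    "supply_voltage_v": (1.6, 3.6),
    "light_level": (0.0, 0.1),      # an exposed, decapsulated die would read far higher
}

def check_sensors(readings: dict) -> str:
    """Return the action the chip would take for a given set of sensor readings."""
    for name, (low, high) in EXPECTED_RANGES.items():
        value = readings.get(name)
        if value is None or not (low <= value <= high):
            return "reset_or_disable"   # stop processing so keys and memory stay protected
    return "continue"

# Example: a light reading consistent with a stripped package triggers the defensive path.
assert check_sensors({"temperature_c": 25.0, "supply_voltage_v": 3.0, "light_level": 5.0}) == "reset_or_disable"
```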
In its solicitation for the e-passport covers, which included the computer chips, GPO specified several hardware and software requirements to protect against physical attack, including specific features to assist in protection against power and timing attacks. It also included requirements for sensors to monitor, for example, temperature and voltage variations, which might be indicative of a physical tamper attack. The chips used in the U.S. e-passports are considered security microcontrollers designed for applications where security is an important consideration, such as payment, identity, and secure access, and, as such, they incorporate several features against physical tamper attacks. Both types of chips used in the e-passports have incorporated some recommended countermeasures for all of the common categories of attack identified by DHS. For example, the chips incorporate temperature and light sensors to monitor when those operating conditions vary from expected values and employ memory encryption against reverse engineering of the chip. While it is not possible to provide complete protection against the more invasive physical attacks, the goal is to make the cost of mounting such an attack prohibitive. While the threat of physical attack to the embedded chips in the e-passport cannot be completely discounted, the security features incorporated into the microcontrollers in U.S. e-passports make a physical tamper attack impractical. During production of the e-passport covers—at the manufacturers, their subcontractors, GPO, and State, or anywhere en route between these sites—the chips are protected from unauthorized access through the contactless interface by authentication procedures based on cryptography. The manufacturing and personalization process for the e-passport booklet is complex and involves many handoffs between different sites, companies, and sometimes different countries. For example, while both e-passport cover contractors originate chip manufacturing in Europe, they also send the chips to various third-party companies in Asia for additional manufacturing steps. The overall process can take almost 2 years from the time the chip leaves the fabrication plant until it is finally issued by the State Department to a bearer as part of an e-passport. During the production life cycle of the e-passport book—from chip creation at the chip manufacturers through to personalization by State—contactless access to the chip is controlled by a symmetric cryptography authentication procedure. Cryptographic algorithms provide different measures of strength, depending on the algorithm and the overall length of the keys involved. According to NIST estimates, the version used on the e-passports can, at best, provide protection from a brute force attack until 2030. This locking mechanism not only controls access to the chip but also selectively allows only certain functions to be performed. Several other design features limit the chance that malicious code could be placed on the chip. For example, according to GPO, an additional step used to protect the e-passport chips from unauthorized access during the manufacturing process takes advantage of the standard industry practice of not including customer identification with chips during production runs. During the chip-manufacturing process, an anonymous cataloging scheme is employed that makes it difficult to associate bulk lots of chips with their destined applications. 
Therefore, on the production floor, it cannot be determined which chips are to be used in U.S. e-passports. In addition, after the chips are manufactured and incorporated into the e-passport cover, steps are taken by GPO and State to protect the user data areas of the chip from tampering. First, as part of its formatting procedures to prepare the chips for personalization, GPO ensures that the user data area is free from any data—including malicious code. During the formatting of the user data area, if any memory cell is found to be defective, then GPO discards the e-passport booklet. Therefore, any malicious code successfully implanted within the user data area after manufacture, at any point in the chip's travels through its production cycle before arriving at GPO, would be erased from the chip. As we previously discussed, during the e-passport personalization process, a digital signature is applied to the data to help assure the integrity and authenticity of the data written to the chip. One of the benefits of the digital signature is that any insertion of malicious code into, for example, the bearer's digital image would be caught, provided the digital signature is fully and properly verified. Such a successful check would provide reasonable assurance that malicious code has not been inserted into the user data areas of the chip memory since it was personalized by State. GPO and State have taken steps to gain confidence that their e-passport computer chips are secure. While these steps do not provide complete assurance that the chips are free from malicious code, the limited communications between the e-passport chip and agency computers significantly lowers the risk that malicious code, if resident on an e-passport chip, could pose to agency computers. The chips have been tested for both interoperability and conformance to ICAO specifications and exercised by GPO as part of their formatting process. The chips have undergone a formal, independent process to validate some aspects of their security. GPO and State also periodically conduct security reviews of the chip manufacturer sites. One key feature that mitigates the risk that malicious code on the chip could pose to agency computers is the highly restricted nature of the data exchange between the chip and agency computers during the reading of the e-passport. The e-passport computer chip adheres to ISO 14443 and ISO 7816-4 for communications through the contactless interface. The standards restrict the computer chip to a slave role whereby it responds only to a specific set of commands with known and limited response data. Because the chip cannot independently initiate communication with a reader, the flow of data from the chip to the reader and host computer can be precisely controlled and limited to only what is expected by the host computer. The result is that opportunities for the covert embedding of malicious code within data transferred from the chip to agency computers are correspondingly limited. For example, the passport number, bearer's name, and date of birth are data sets restricted to a well-defined set of characters and are of fixed length. Consequently, if a reader accepts inputs only within these bounds, it will limit the risk posed by malicious code. The digital image of the bearer is the only data set transferred that is large enough to provide opportunities to hide malicious code. The image is formatted according to a standard graphics format that facilitates integrity checking of its contents. 
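The kind of bounds checking this implies on the reader side can be sketched as follows. The field lengths, character set, and image-size limit used here are illustrative assumptions rather than the actual CBP reader rules.

```python
# Sketch of the bounds checking described above: reject chip data that falls outside
# the fixed lengths and restricted character sets expected for each field. The rules
# below are illustrative, not the actual reader or workstation validation logic.

import re
import string

MRZ_CHARS = set(string.ascii_uppercase + string.digits + "<")

def validate_field(value: str, exact_length: int) -> bool:
    """Fixed-length fields drawn from the machine-readable-zone character set."""
    return len(value) == exact_length and set(value) <= MRZ_CHARS

def validate_date(value: str) -> bool:
    """Dates on the chip are six digits (YYMMDD)."""
    return re.fullmatch(r"\d{6}", value) is not None

def validate_facial_image(image: bytes, max_bytes: int = 64 * 1024) -> bool:
    """The only large field; accept it only if it is a bounded, well-formed image.
    A JPEG header check stands in for full format validation here."""
    return len(image) <= max_bytes and image[:2] == b"\xff\xd8"

# Example use against data read from a chip:
ok = (validate_field("L898902C3", 9)                       # passport number
      and validate_date("740812")                          # date of birth
      and validate_facial_image(b"\xff\xd8" + b"\x00" * 100))
```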
According to DHS officials, when e-passports are read, the data from the chip are verified by both the e-passport reader and the agency host computer before the data are processed.
Testing Helps to Verify Proper Functioning of E-passport Chip Communications
Prior to contract award, and at various points thereafter, the U.S. e-passport chips have undergone testing for a variety of purposes. According to GPO officials, the solicitation for the e-passport covers was based on State Department requirements for specific functionality, security, performance, and availability. For example, it included requirements for the chip to meet ISO 14443 communications and ISO 7816-4 command set standards and other standard specifications. As part of the award selection process, GPO, State, NIST, and NSA conducted testing of sample books from each bidder to determine whether they would meet requirements as specified in the request for proposal. During pre-award testing, for example, GPO ran initial tests to ensure basic functionality as specified by ISO 7816-4, including the ability to initialize, read, write, and lock the chip. GPO also ensured that each e-passport cover was of the correct form and thickness so that it could mechanically pass through its production equipment suite. The sample booklets then went to State, which conducted tests to ensure the books could work with its personalization systems. According to NIST officials, they performed electronic testing that looked at the potential for eavesdropping, jamming, and remote activation (skimming). For eavesdropping, the test was conducted to determine whether the legitimate communication could be intercepted, but no attempt was made to see if the encrypted communication could be understood. For jamming, the purpose was to determine whether legitimate communications with the chip could be prevented. For remote activation, the purpose was to determine the distance from which a reader could elicit a response from the chip, but no attempt was made to test the basic access control or to read the data on the chip. NIST also conducted different types of durability tests including static bend, dynamic bend, climate, chemical resistance, physical protection of the integrated circuit chip, and electromagnetic testing. None of NIST's tests were designed to test for the presence of malicious code on the chip. While the tests exercised some portions of ISO 14443 and ISO 7816-4, NIST did not conduct any tests to ensure full conformance with these standards. NSA officials stated that they conducted electronic testing of the booklet, but this was confined to radio frequency and shielding testing specifically tasked by GPO to evaluate the booklet's susceptibility to skimming, by looking at the distance over which the booklet's chip could become energized. NSA performed no substantive tests of communication with the chip and no testing at all with regard to malicious code. As part of GPO's normal pre-personalization processing, GPO exercises and tests each chip's functionality to verify, among other things, the correct reading and writing of every chip. GPO's processing does not systematically exercise every chip function or the full ISO 7816-4 command set and associated error handling. 
GPO officials said that while they test the basic functionality of the chip as they proceed through the pre-personalization processing, full ISO 14443 communications and ISO 7816-4 command set processing—including ensuring that all error handling is performed correctly—is done as part of the international ICAO interoperability and conformance tests held approximately every 2 years. The State Department is the official U.S. representative to these tests, although GPO frequently participates, by request, in support of State. According to ICAO, the interoperability and conformance tests are intended to accomplish two things. First, they ensure that e-passports from different countries can be read by readers provided by multiple vendors. Second, they ensure compliance with various aspects of the ISO 14443 communication and ISO 7816-4 command set standards. The U.S. e-passport chips have been part of some of the interoperability and conformance tests that have been run in the last several years. All these tests provide important assurances for their stated purposes by exercising functionality, in particular the limited e-passport chip communications, that helps to protect against the risk of malicious code. In general, though, such testing is limited to verifying functionality and cannot provide absolute assurance that malicious code has not been implanted onto the e-passport computer chip. The creation of the computer chip used in U.S. e-passports is a complex process that involves many components created by different entities. Because the U.S. government does not control the entire supply chain for all the components on the chip, it relies on security features provided by the chip component suppliers, the extent to which these suppliers test and certify their products, and the extent to which these suppliers develop and produce the chips in a secure manner.
Some Aspects of the Security of the Chips Were Certified Using Common Criteria
NIST guidelines state that federal agencies should give substantial consideration in IT procurements to products that have been evaluated and tested by accredited laboratories against appropriate security specifications and requirements. One established mechanism for providing security evaluation and testing services for commercial-off-the-shelf hardware, software, or firmware is Common Criteria. Common Criteria certifications are a well-known international standard mechanism for validating and documenting various security aspects of IT products. Evaluations are performed by accredited Common Criteria testing laboratories whose results are then certified by a validation body. In the case of the chips used in the U.S. e-passports, selected security features of their hardware components were evaluated using Common Criteria by a recognized European laboratory and certified by Germany's Common Criteria certification body. In its solicitation for the e-passport covers, including the computer chips, GPO specified that preference would be given to computer chips that are certified at Common Criteria EAL 4+ against a Common Criteria-compliant Protection Profile. According to Common Criteria definitions, an EAL 4 rating is intended to provide a moderate to high level of independently assured security. 
To achieve this rating, the testing lab must conduct a variety of structured activities, including an analysis of the security functions of the product using a complete interface specification and both the high-level and low-level design of the specific features of the product being tested, review and confirmation of any vendor testing that was conducted, and conduct of an independent vulnerability analysis demonstrating resistance to penetration attackers with a low attack potential. The computer chips selected for use in the e-passports each had received an EAL 5+ rating against a compliant Protection Profile. According to Common Criteria, an EAL 5 rating incorporates all of the EAL 4 requirements and, in addition, requires, among other things, semiformal design descriptions, a more structured architecture, covert channel analysis, and improved mechanisms that provide confidence that the particular implementation of the product being evaluated has not been tampered with during development. Specific security features evaluated to achieve the EAL 5 rating include many useful in helping to prevent the introduction of malicious code. Examples of these include support for cryptographic functions, protections against physical manipulation, and features to ensure correct operating conditions for the chip. However, a key software component of the chip—the operating system— was excluded from the evaluation. The operating system on the chip implements and controls, among other functions, the ISO 7816-4 command set that is the primary means of communication between the chip and the outside world—including agency computers. Under Common Criteria, it is not uncommon for critical components of a product to be excluded for particular evaluations. In particular, the exclusion of important software components, such as the operating system, from the Common Criteria evaluation of hardware features is not unusual because the higher-level software embedded on chips is often a third-party product and not designed by the chip manufacturer itself. The chip manufacturer is typically not responsible for undertaking a Common Criteria evaluation of third-party embedded software used on its chips. Typically, it would be up to the software provider to get its product certified using Common Criteria. However, this is an expensive and time- consuming process. Hence, care needs to be taken with Common Criteria certifications that can be meaningfully understood only within the context of the specific subset of security functions included in the evaluation. We have previously noted that one of the challenges in using the National Information Assurance Partnership is the difficulty in matching agencies’ needs with the availability of NIAP-evaluated products. According to Infineon and Gemalto officials, back in 2006 when the request for proposal for the e-passport covers was issued, there was no Protection Profile available that covered the operating systems of such chips. Since that time, however, Common Criteria operating systems suitable for use on smart cards have become available. According to GPO officials, Infineon provides such chips today, and GPO is in the process of transitioning them into production so that, at least for the Infineon line, the e-passports will include a Common Criteria-certified operating system. The user operating system contains arguably most of the software functioning on the chip. 
Therefore, obtaining assurance as to its secure functioning and freedom from malicious code is an important activity. However, given the highly restricted nature of the current communications between the chip and agency computers, we do not see the lack of Common Criteria certification of the chip operating system as significantly increasing the risk to agency computers from malicious code. While Common Criteria certification confers some assurance regarding the specific security functions included in the evaluation, care must be taken in extending that assurance into confidence in the overall security of the product for its intended use. GAO has previously reported that within its limitations, the Common Criteria process provides benefits. However, the lack of performance measures leaves questions unanswered as to its true effectiveness. The use of commercial products that have been independently tested and evaluated is only a part of a security solution that contributes to the overall information assurance of a product. GPO Has Conducted Reviews of the E-passport Computer Chip Manufacturing Sites Prior to contract award, and periodically thereafter, GPO—sometimes accompanied by the State Department—conducted on-site security reviews of the companies that manufacture the e-passport chips and the covers, and of some of their subcontractors. According to GPO officials, its reviews are concerned with not just security risks, but also with other risks—for example, the extent to which a site performs continuity of operations planning or the risk that a single source of supply for one of the components might pose a risk to the delivery of the components. In conducting the security reviews, GPO officials stated that they make an attempt to visit every vendor involved in the production of the e-passport booklet, including, for example, the security ink suppliers, paper providers, thread providers, and the chip providers. The sites are spread across several countries, and within some countries there may be multiple sites. For example, for both Infineon and Gemalto, production of the chips involves several sites within Europe. These reviews employ an American National Standards Institute (ANSI) standard for security product manufacturing that covers a variety of risk areas, including information, IT, material, supply chain, physical intrusion, personnel, and disaster recovery. For example, the standard addresses such concerns as proper controlled access to restricted areas within a facility. During the security review, GPO generally gets a high-level briefing from the company and talks with staff at the site. According to GPO officials, they have reviewed almost every site twice since March 2006. In recent security reviews of the chip manufacturing sites, both Infineon and NXP were found to be in compliance with their own stated security policies and meeting the Class 1 level of the ANSI standard. From the security reviews, GPO can get some sense of some of the protections in place at the development sites—for example, access control to development areas and security awareness training. GPO learned through its reviews, for example, that Gemalto has an access control policy wherein development premises are divided into secure and nonsecure zones, and the operating system development is in the secured zone. This provides some assurance that since physical access to the software destined for the chips is controlled, opportunities for the inclusion of malicious code can be limited. 
Given that there can be no guarantees against a malicious code attack originating from the e-passport computer chip, agency systems need to have a strong security posture, in accordance with federal government standards. We have previously reported on weaknesses in DHS’s US-VISIT computer systems, which could increase the ability of malicious code to infect and propagate through agency computers. Weaknesses, such as unpatched software vulnerabilities, can invite a malicious code attack and enhance the ability of the attack to spread across the network by leaving important linkages within the network unprotected. DHS needs to address these deficiencies to ensure that any malicious code resident on the e-passport chip and read onto DHS computers can be contained and its effect minimized. One of the strong recommendations from NIST is that computer systems run antivirus software, which scans systems’ files and memory spaces for known malware. NIST strongly recommends the use of antivirus software to identify and protect against malicious code. Detecting such code prior to its further spread can limit a malicious code infection and protect downstream systems. According to DHS officials, workstations that control the interface with the chip are protected by antivirus software, which includes access protections, buffer overflow protections, and scanning of files as they are accessed. One of the key weaknesses in US-VISIT that we found in 2007—patch management—is of particular concern with respect to malicious code that could be read from an e-passport. Malicious code often attacks systems by exploiting vulnerabilities in operating systems, services, and applications. When software vulnerabilities are discovered, the software vendor may develop and distribute a patch or workaround to mitigate the vulnerability. Patch management is, therefore, an important element in mitigating the risks associated with malicious code and the vulnerabilities they depend on. NIST’s, NSA’s, and DHS’s own policies stress the importance of keeping computer systems up to date with security patches. Outdated and unsupported software is more vulnerable to attacks and exploitation. NIST guidelines state that applying patches is one of the most effective ways of reducing the risk of malware incidents. In our prior report, we noted that while DHS has taken steps to ensure that patches for the workstations’ operating system were kept up to date, some workstations at the ports of entry did not consistently maintain secure configurations. As a result, vulnerabilities left unpatched on those systems increase the chance of malicious code being executed should it get ingested. According to DHS officials, they are in the midst of upgrading workstations to a version of Microsoft Windows that contains features to help prevent the execution of malicious code—for example, special services to detect and prevent the execution of code from the data areas. DHS needs to ensure that it completes the upgrade of the workstations and that such services are enabled on workstations reading data from the e-passport computer chips. Ensuring the integrity of passports requires continual vigilance so that they can continue to be used to support the critical border security mission—facilitating the travel of those who are entitled to enter the United States while preventing the entry of those who are not. 
A well- designed passport has limited utility if it is not well produced or border officers do not utilize the available security features to detect attempts to fraudulently enter the United States. While U.S. e-passport covers, including the embedded computer chip, are manufactured by foreign companies, State’s public key infrastructure, which is used to generate digital signatures during the personalization process for each issued passport, can provide reasonable assurance that the data written onto the chip were authored by State and have not been altered. However, DHS has not implemented the capabilities needed for CBP officers to fully utilize this security feature. Without e-passport readers at the ports of entry or a system that allows for the full validation of digital signatures on e-passports, CBP officers’ inspection of not only U.S. e-passports, but also of e-passports issued by foreign countries, including those participating in the visa waiver program, is affected. Without these capabilities, the additional security against forgery and counterfeiting that could be provided by the inclusion of computer chips on e-passports issued by the United States and foreign countries, including those participating in the visa waiver program, is not fully realized. While the use of e-passports and radio frequency communications represents another potential attack vector to federal computer systems, the risk posed by the transmission of malicious code on U.S. e-passports is not significant. The U.S. e-passport chips have security features that minimize the threat of tampering during the manufacturing and production process. GPO and State have also taken steps to assure the security of the embedded computer chips in U.S. e-passports. Because the communications between e-passport computer chips and federal computer systems have been designed to be limited, the opportunities for transfer of malicious code are correspondingly limited. Combined, these measures significantly reduce the risks from someone using e-passport computer chips as a conveyance for malicious code to federal computer systems. To ensure that border officers can more fully utilize the security features of electronic passports, we recommend that the Secretary of Homeland Security take the following two actions to provide greater assurance that electronic passport data were written by the issuing nation and have not been altered or forged: Design and implement the systems functionality and databases needed to fully verify electronic passport digital signatures at U.S. ports of entry. In coordination with the Secretary of State, develop and implement an approach to obtain the digital certificates necessary to validate the digital signatures on U.S. and other nations’ electronic passports. We provided draft copies of this report to the Secretaries of State and Homeland Security and to the Public Printer at the Government Printing Office for review and comment. We received formal written comments from the Department of Homeland Security, which are reprinted in appendix III. In its comments, DHS concurred with our recommendations. However, DHS believes that the report incorrectly portrays CBP’s ability to detect the fraudulent use of U.S. passports. DHS cites the ability of CBP’s officers to access U.S. passport application data from State and use it to detect impostors and altered data in U.S. passports. 
We agree that providing State passport data to CBP officers during the inspection process enhances their ability to detect the fraudulent use of U.S. e-passports. Nevertheless, while State has expended significant resources to produce an e-passport that includes contactless chip technology and public key cryptography to help prevent counterfeiting and forgery, DHS has not implemented the capabilities to fully utilize these security features and is not fully realizing the security benefits of the inclusion of electronic technology on e-passports. We received informal comments from the State Department. State believes that the draft report presents a comprehensive and balanced assessment of the security of the e-passport design. We also received technical comments from State, GPO, and DHS, which we incorporated in the report, as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of State and Homeland Security and the Public Printer. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4499 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine whether e-passport chips can be altered or forged so that a traveler could fraudulently enter the United States, we interviewed officials from State’s Bureau of Consular Affairs and reviewed State Department policies, procedures, and guidance documents regarding the public key infrastructure (PKI) used to protect the data on the e-passport computer chip and assessed them against relevant International Civil Aviation Organization (ICAO) and National Institute of Standards and Technology (NIST) standards and guidelines. We interviewed officials at one passport issuance agency and reviewed systems documentation to understand how U.S. e-passports are personalized. We determined the extent to which U.S. e-passport computer chips are inspected at U.S. ports of entry by interviewing Department of Homeland Security (DHS) officials and reviewing documentation regarding the systems and procedures used to inspect e-passports at the ports of entry. Within DHS, we met with officials from the U.S Customs and Border Protection (CBP), the Screening Coordination Office, and the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program office. To determine whether malicious code on the e-passport chips poses a risk to national security, we determined how U.S. e-passport computer chips are manufactured and incorporated into the production of blank U.S. e-passport booklets based on interviews with the Government Printing Office (GPO) and manufacturer officials and our reviews of GPO documentation. We met with officials from NIST and the National Counterterrorism Center to determine the level of threat that exists to U.S. e-passports. We interviewed GPO and State officials and reviewed documentation that describes the U.S. e-passport computer chip architecture and operations. 
We reviewed documents governing the manufacturing of the blank e-passport covers, including GPO contracts with the manufacturers and the memorandum of understanding between GPO and State. We determined that for malicious code on the e-passport computer chip to be a risk to agency computers, it must first get on the chip, then get transferred off the chip and onto agency computers, and then subsequently get executed. Therefore, we identified and evaluated protections that have been designed into the e-passport computer chip to reduce the possibility of malicious code being introduced onto the chip, controls in place to limit the transfer of malicious code off of the chip and onto agency computers, and the security posture of the agency computer systems interfacing with the e-passport chip. We also reviewed the results of testing conducted on the e-passport computer chips by GPO, NIST, the National Security Agency, and ICAO, and through the Common Criteria program. We discussed and reviewed the results of security reviews conducted by GPO. We met with GPO, State, and CBP officials to understand how each agency interacts with the e-passport computer chips and the potential risk that malicious code could pose to these agencies. We conducted this performance audit from June 2008 to January 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cryptography is the transformation of ordinary data (commonly referred to as plaintext) into a code form (ciphertext) and back into plaintext using a special value known as a key and a mathematical process called an algorithm. Cryptography can be used on data to (1) hide their information content, (2) prevent their undetected modification, and/or (3) prevent their unauthorized use. A basic premise in cryptography is that good systems depend only on the secrecy of the key used to perform the operations rather than on any attempt to keep the algorithm secret. The algorithms used to perform most cryptographic operations over the Internet are well known; however, because the keys used by these algorithms are kept secret, the process is considered secure. The basis of PKI’s security assurances is a sophisticated cryptographic technique known as public key cryptography, which employs algorithms designed so that the key that is used to encrypt plaintext cannot be calculated from the key that is used to decrypt the ciphertext. These two keys complement each other in such a way that when one key is used for encryption, only the other key can decrypt the ciphertext. One of these keys is kept private and is known as the private key, while the other key is widely published and is referred to as the public key. When used as shown in figure 4, public key cryptography can help to assure data confidentiality because only the private key can be used to decrypt the information encrypted using the public key. When used as shown in figure 5, public key cryptography can help provide authentication, nonrepudiation, and data integrity because the public key will only work to decrypt the information if it was encrypted using the private key. 
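To make the two complementary uses of the key pair concrete, the following is a minimal sketch of the confidentiality direction described above, written against the open-source Python cryptography package; the 2048-bit RSA key and OAEP padding are illustrative assumptions, not a description of any agency's practice. The signing direction is sketched after the digital signature walkthrough below.

```python
# Minimal sketch of the confidentiality use of public key cryptography:
# anyone may encrypt with the published public key, but only the holder of the
# private key can decrypt. Key size and padding are illustrative choices.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # published widely; the private key stays secret

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"message for the key holder", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)   # only the private key recovers it
assert plaintext == b"message for the key holder"
```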
In both cases, ensuring the security of the private key is vital to providing the necessary security protections. If the private key is compromised, there can be little assurance that data confidentiality, authentication, and data integrity can be provided by the PKI. Cryptographic techniques are used to generate and manage the key pairs (a public key and private key), which are in turn used to create electronic “certificates,” which link an individual or entity, such as State, to its public key. These certificates are then used to verify digital signatures (providing authentication and data integrity). Public key cryptography can be used to create a digital signature for a message or transaction, thereby providing authentication, data integrity, and nonrepudiation. For example, if Bob wishes to digitally sign an electronic document, he can use his private key to encrypt it. His public key is freely available, so anyone with access to his public key can decrypt the document. Although this seems backward because anyone can read what is encrypted, the fact that Bob’s private key is held only by Bob provides the basis for Bob’s digital signature. If Alice can successfully decrypt the document using Bob’s public key, then she knows that the message came from Bob because only he has access to the corresponding private key. Of course, this assumes that (1) Bob has sole control over his private signing key and (2) Alice is sure that the public key used to validate Bob’s messages really belongs to Bob. Digital signature systems use a two-step process, as shown in figure 6. First, a hash algorithm is used to condense the data into a message digest. Second, the message digest is encrypted using Bob’s private signing key to create a digital signature. Because the message digest will be different for each message, each signature will also be unique, and using a good hash algorithm, it is computationally infeasible to find another message that will generate the same message digest. Alice (or anyone wishing to verify the document) can compute the message digest of the document and decrypt the signature using Bob’s public key, as shown in figure 7. Assuming that the message digests match, Alice then has three kinds of security assurance. First, that Bob actually signed the document (authentication). Second, the digital signature ensures that Bob in fact sent the message (nonrepudiation). And third, because the message digest would have changed if anything in the message had been modified, Alice knows that no one tampered with the contents of the document after Bob signed it (data integrity). Again, this assumes that (1) Bob has sole control over his private signing key and (2) Alice is sure that the public key used to validate Bob’s messages really belongs to Bob.
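The two-step process just described can be sketched in a few lines using the same Python cryptography package. This is a minimal illustration of the general technique, assuming an RSA key pair and PKCS #1 v1.5 padding for simplicity; it does not depict the specific algorithms or infrastructure State uses to sign e-passport data.

```python
# Minimal sketch of signing and verification as described above. The key size
# and padding scheme are assumptions for illustration only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

document = b"contents of the electronic document"

# Steps 1 and 2: condense the document into a SHA-256 message digest and
# encrypt that digest with Bob's private signing key (sign() does both).
signature = bob_private.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification: recompute the digest from the received document and compare it
# with the digest recovered from the signature using Bob's public key.
try:
    bob_public.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("valid: authentication, integrity, and nonrepudiation assurances hold")
except InvalidSignature:
    print("invalid: the document was altered or the key does not match")
```

Because the library computes the message digest internally, the hashing and encryption steps shown in figure 6 appear here as a single sign call; verification similarly bundles the steps shown in figure 7.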
A digital certificate is an electronic credential that guarantees the association between a public key and a specific entity. It is created by placing the entity’s name, the entity’s public key, and certain other identifying information in a small electronic document that is stored in a directory or other database. Directories may be publicly available repositories kept on servers that act like telephone books for users to look up others’ public keys. The digital certificate itself is created by a trusted third party called a certification authority, which digitally signs the certificate, thus providing assurance that the public key contained in the certificate does indeed belong to the individual or organization named in the certificate. A certification authority is responsible for managing digital certificates. The purpose of the certification authority is to oversee the generation, distribution, renewal, revocation, and suspension of digital certificates. The certification authority may set restrictions on a certificate, such as the starting date for which the certificate is valid as well as its expiration date. It is at times necessary to revoke digital certificates before their established expiration dates, for example, when the private key is compromised. Therefore, the certification authority is also responsible for providing certificate status information and may publish a certificate revocation list in a directory or maintain an online status-checking mechanism. The PKI software in the user’s computer can verify that the certificate is valid by first verifying that the certificate has not expired and then by assuring that it has not been revoked or suspended. In addition to the contact named above, William Carrigg, Richard Hung, and John C. Martin made key contributions to this report.
In 2005, the Department of State (State) began issuing electronic passports (e-passports) with embedded computer chips that store information identical to that printed in the passport. By agreement with State, the U.S. Government Printing Office (GPO) produces blank e-passport books. Two foreign companies are used by GPO to produce e-passport covers, including the computer chips embedded in them. At U.S. ports of entry, the Department of Homeland Security (DHS) inspects passports. GAO was asked to examine potential risks to national security posed by using foreign suppliers for U.S. e-passport computer chips. This report specifically examines the following two risks: (1) Can the computer chips used in U.S. e-passports be altered or forged to fraudulently enter the United States? (2) What risk could malicious code on the U.S. e-passport computer chip pose to national security? To conduct this work, GAO reviewed documents and interviewed officials at State, GPO, and DHS relating to the U.S. e-passport design and manufacturing and e-passport inspection systems and procedures. State has developed a comprehensive set of controls to govern the operation and management of a system to generate and write a security feature called a digital signature on the chip of each e-passport it issues. When verified, digital signatures can help provide reasonable assurance that data placed on the chip by State have not been altered or forged. However, DHS does not have the capability to fully verify the digital signatures because it has not deployed e-passport readers to all of its ports of entry and it has not implemented the system functionality necessary to perform the verification. Because the value of security features depends not only on their solid design, but also on an inspection process that uses them, the additional security against forgery and counterfeiting that could be provided by the inclusion of computer chips on e-passports issued by the United States and foreign countries, including those participating in the visa waiver program, is not fully realized. Protections designed into the U.S. e-passport computer chip limit the risks of malicious code being resident on the chip, a necessary precondition for a malicious code attack to occur from the chip against computer systems that read them. GPO and State have taken additional actions to decrease the likelihood that malicious code could be introduced onto the chip. While these steps do not provide complete assurance that the chips are free from malicious code, the limited communications between the e-passport chip and agency computers significantly lowers the risk that malicious code--if resident on an e-passport chip--could pose to agency computers. Finally, given that no protection can be considered foolproof, DHS still needs to address deficiencies noted in our previous work on its computer systems to mitigate the impact of any malicious code that may be read from e-passport computer chips and infect those systems.
Medicare covers up to 100 days of care in a SNF after a beneficiary has been hospitalized for at least 3 days. To qualify for the benefit, the patient must need skilled nursing or therapy on a daily basis. For the first 20 days of SNF care, Medicare pays all the costs, and for the 21st through the 100th day, the beneficiary is responsible for daily coinsurance of $95 in 1997. To qualify for Medicare’s home health benefit, a beneficiary must be confined to the home; need intermittent skilled nursing care, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for skilled nursing; physical, occupational, and speech therapy; medical social services; and home health aide visits. Beneficiaries are not liable for any coinsurance or deductibles for these home health services, and there is no limit on the number of visits for which Medicare will pay. Medicare covers care in rehabilitation hospitals that specialize in such care and units within acute-care hospitals that also specialize. To qualify, beneficiaries must have one or more conditions requiring intensive and multidisciplinary rehabilitation services on an inpatient basis. In addition, to qualify as a rehabilitation facility, hospitals and units in acute-care hospitals must demonstrate their status by such factors as furnishing primarily intensive rehabilitation services to an inpatient population, at least 75 percent of whom require treatment of 1 or more of 10 specified conditions (for example, stroke or hip fracture). Rehabilitation facilities must also use a treatment plan for each patient that is established, reviewed, and revised as needed by a physician in consultation with other professional personnel. Inpatient rehabilitation is treated like any other hospitalization for beneficiary cost-sharing purposes. Medicare pays SNFs and home health agencies on the basis of their reasonable costs, subject to limits on routine costs for SNFs and per-visit costs for home health agencies, and exceptions to the limits are available to those that can show that their costs are above the limits for reasons not under their control. Inpatient rehabilitation care, provided at both rehabilitation hospitals and units of acute-care hospitals, is exempt from Medicare’s hospital prospective payment system (PPS), but is subject to the payment limitations and incentives established by the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA). Under this law, Medicare pays these facilities the lower of the facility’s average Medicare allowable inpatient operating costs per discharge or its target amount. The target amount is based on the provider’s allowable costs per discharge in a base year, trended to the current year through an annual update factor. A TEFRA facility with inpatient operating costs below its ceiling receives its costs plus 50 percent of the difference between these costs and the ceiling or 5 percent of the ceiling, whichever is less. Rehabilitation facilities receive cost-based payments without regard to the TEFRA limits until they complete a full cost-reporting year, and that year is then used as their base year. Long-term care hospitals are another category exempted from the hospital PPS. To qualify as long term, hospitals must have an average length of stay of at least 25 days for their Medicare patients. Medicare pays these hospitals on the basis of their costs, subject to TEFRA limits, just like rehabilitation hospitals. The number of long-term care hospitals has grown from 94 in 1986 to 146 in 1994, and Medicare payments to them have increased considerably from about $200 million in 1989 to about $800 million in 1994.
However, these hospitals remain a small part of the Medicare program, representing less than 0.5 percent of expenditures, and little research or analysis has been done on them. As a result, little is known about the reasons for the growth that has occurred in the long-term care hospital area. A prospective payment system for rehabilitation facilities is not included in the administration’s fiscal year 1998 budget proposals. The Medicare SNF, home health, and inpatient rehabilitation benefits are three of the fastest growing components of Medicare spending. From 1989 to 1996, Medicare part A SNF expenditures increased over 300 percent, from $2.8 billion to $11.3 billion. During the same period, part A expenditures for home health increased from $2.4 billion to $17.7 billion—an increase of over 600 percent. Rehabilitation facility payments increased from $1.4 billion in 1989 to $3.9 billion in 1994, the latest year for which complete data were available. SNF payments currently represent 8.6 percent of part A Medicare expenditures; home health, 13.5 percent; and rehabilitation facilities, 3.4 percent. At Medicare’s inception in 1966, the home health benefit under part A provided limited posthospital care of up to 100 visits per year after a hospitalization of at least 3 days. In addition, the services could only be provided within 1 year after the patient’s discharge and had to be for the same illness. Part B coverage of home health also was limited to 100 visits per year. These restrictions under part A and part B were eliminated by the Omnibus Reconciliation Act of 1980 (ORA) (P.L. 96-499), but little immediate effect on Medicare costs occurred. With the implementation of the Medicare inpatient PPS in 1983, use of the SNF and home health benefits was expected to grow as patients were discharged from the hospital earlier in their recovery periods. But HCFA’s relatively stringent interpretation of coverage and eligibility criteria held growth in check for the next few years. As a result of court decisions in the late 1980s, HCFA issued guideline changes for the SNF and home health benefits that had the effect of liberalizing coverage criteria, thereby making it easier for beneficiaries to obtain SNF and home health coverage. Additionally, the changes prevent HCFA’s claims processing contractors from denying physician-ordered SNF or home health services unless the contractors can supply specific clinical evidence that indicates which particular services should not be covered. For example, ORA 1980 and HCFA’s 1989 home health guideline changes have essentially transformed the home health benefit from one focused on patients needing short-term posthospital care to one that serves chronic, long-term care patients as well. The number of beneficiaries receiving home health care more than doubled in the last few years, from 1.7 million in 1989 to about 3.9 million in 1996. During the same period, the average number of visits to home health beneficiaries also more than doubled, from 27 to 72. In a recent review of home health care, we found that from 1989 to 1993, the proportion of home health users receiving more than 30 visits increased from 24 to 43 percent and those receiving more than 90 visits tripled, from 6 to 18 percent, indicating that the program is serving a larger proportion of longer-term patients. Moreover, about a third of beneficiaries receiving home health care did not have a prior hospitalization, another possible indication that care for chronic conditions is being provided.
Similarly, the number of people receiving care from SNFs has also almost doubled, from 636,000 in 1989 to 1.1 million in 1996. While the average length of a Medicare-covered SNF stay has not changed much during that time, the average Medicare payment per day has almost tripled—from $98 in 1990 to $292 in 1996. Use of ancillary services, such as physical and occupational therapy, has increased dramatically and accounts for most of the growth in per-day cost. For example, our analysis of 1992 through 1995 SNF cost reports shows that reported ancillary costs per day have increased 67 percent, from $75 per day to $125 per day, while reported routine costs per day have increased only 20 percent, from $123 to $148. Unlike routine costs, which are subject to limits, ancillary services are only subject to medical necessity criteria, and Medicare does relatively little review of their use. Moreover, SNFs can cite high ancillary service use to justify an exception to routine service cost limits, thereby increasing payments for routine services. In 1994, patients with any of 12 DRGs commonly associated with posthospital SNF use had 4- to 21-percent shorter stays in hospitals with SNF units than patients with the same DRGs in hospitals without SNF units. Additionally, by owning a SNF, hospitals can increase their Medicare revenues through receipt of the full DRG payment for patients with shorter lengths of stay and a cost-based payment after the patients are transferred to the SNF. The availability of inpatient rehabilitation beds has also increased dramatically. Between 1986 and 1994, the number of Medicare-certified rehabilitation facilities grew from 545 to 1,019, an 87-percent increase. A major portion of this growth represents the increase in rehabilitation units located in PPS hospitals, which went from 470 to 824 over the same period. Inpatient rehabilitation admissions for Medicare beneficiaries increased from 2.9 per 1,000 in 1986 to 7.2 per 1,000 in 1993, or 148 percent. Some of this increase in beneficiary use was due to increases in the number of acute-care admissions that often lead to use of rehabilitation facilities. For example, the DRG that includes hip replacement grew from 218,000 discharges during fiscal year 1989 to 344,000 in fiscal year 1995. For the same DRG, average length of stay in acute-care hospitals decreased from 12 to 6.7 days over that period. As was the case with SNFs, beneficiaries admitted to rehabilitation units in 1994 following a stay in an acute-care hospital had shorter average lengths of stay than beneficiaries admitted to rehabilitation hospitals. They also had shorter stays in the acute-care hospital. Moreover, the same scenario that applies to hospital-based SNFs applies to rehabilitation units. The quicker that hospitals discharge a patient to the rehabilitation unit, the lower that patient’s acute-care costs are. By having a rehabilitation unit, hospitals can increase their Medicare revenues through receipt of the full DRG payment for patients with shorter lengths of stay and a cost-based payment after the patients are admitted to rehabilitation. Rapid growth in SNF and home health expenditures has been accompanied by decreased, rather than increased, funding for program safeguard activities. For example, our March 1996 report found that part A contractor funding for medical review had decreased by almost 50 percent between 1989 and 1995.
As a result, while contractors had reviewed over 60 percent of home health claims in fiscal year 1987, their review target had been lowered by 1995 to 3.2 percent of all claims (or sometimes, depending on available resources, to a required minimum of 1 percent). We found that a lack of adequate controls over the home health program, such as little intermediary medical review and limited physician involvement, makes it nearly impossible to know whether the beneficiary receiving home health care qualifies for the benefit, needs the care being delivered, or even receives the services being billed to Medicare. Also, because of the small percentage of claims now selected for review, home health agencies that bill for noncovered services are less likely to be identified than they were 10 years ago. Similarly, the low level of review of SNF services makes it difficult to know whether the recent increase in ancillary service use is legitimate (for example, because patient mix has shifted toward those who need more services) or is simply a way for SNFs to get more revenues. Medicare’s peer review organization (PRO) contractors have responsibility for oversight of Medicare inpatient rehabilitation hospitals and units from both utilization and quality-of-care perspectives. However, the PROs’ emphasis has changed in recent years, with a greater focus on quality reviews and less emphasis on case review. In fact, the current range of work for PROs requires no specific review for the appropriateness of inpatient rehabilitation use. Finally, because relatively few resources have been available for auditing end-of-year provider cost reports, HCFA has little ability to identify whether home health agencies, SNFs, and rehabilitation facilities are charging Medicare for costs unrelated to patient care or other unallowable costs. Because of the lack of adequate program controls, it is quite possible that some of the recent increase in home health, SNF, and rehabilitation facility expenditures stems from abusive practices. The Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191), also known as the Kassebaum-Kennedy Act, has increased funding for program safeguards. However, per-claim expenditures will remain below the level they were in 1989, after adjusting for inflation. We project that, in 2003, payment safeguard spending as authorized by Kassebaum-Kennedy will be just over one-half of the 1989 per-claim level, after adjusting for inflation. The goal in designing a PPS is to ensure that providers have incentives to control costs and that, at the same time, payments are adequate for efficient providers to furnish needed services and at least recover their costs. If payments are set too high, Medicare will not save money and cost-control incentives can be weak. If payments are set too low, access to and quality of care can suffer. In designing a PPS, selection of the unit of service for payment purposes is important because the unit used has a strong effect on the incentives providers have for the quantity and quality of services they provide. Taking into account the varying needs of patients for different types of services—routine, ancillary, or all—is also important. A third important factor is the reliability of the cost and utilization data used to compute rates. Good choices for unit of service and cost coverage can be overwhelmed by bad data. 
We understand that the administration will propose a SNF PPS that would pay per diem rates covering all facility cost types and that payments would be adjusted for differences in patient case mix. Such a system is expected to be similar to HCFA’s ongoing SNF PPS demonstration project that is testing the use of per diem rates adjusted for resource need differences using the Resource Utilization Group, version III (RUG-III) patient classification system. This project was recently expanded to include coverage of ancillary costs in the prospective payment rates. An alternative to the proposal’s choice of a day of care as the unit of service is an episode of care—the entire period of SNF care covered by Medicare. While substantial variation exists in the amount of resources needed to treat beneficiaries with the same conditions when viewed from the day-of-care perspective, even more variation exists at the episode-of-care level. Resource needs are less predictable for episodes of care. Moreover, payment on an episode basis may result in some SNFs inappropriately reducing the number of covered days. Both factors make a day of care the better candidate for a PPS unit of service. Furthermore, the likely patient classification system, RUG-III, is designed for and being tested in a per diem PPS. On the other hand, a day-of-care unit gives few, if any, incentives to control length of stay, so a review process for this purpose would still be needed. The states and HCFA have a lot of experience with per diem payment methods for nursing homes under the Medicaid program, primarily for routine costs but also, in some cases, for total costs. This experience should prove useful in designing a per diem Medicare PPS. As discussed earlier, much of the recent growth in SNF costs has come from ancillary services, particularly therapy services. This, in turn, means that it is important to give SNFs incentives to control ancillary costs, and including them under PPS is a way to do so. However, adding ancillary costs does increase the variability of costs across patients and places additional importance on the case-mix adjuster to ensure reasonable and adequate rates. Turning to the adequacy of HCFA’s databases for SNF PPS rate-setting purposes, our work, and that of the Department of Health and Human Services’ (HHS) Inspector General, has found examples of questionable costs in SNF cost reports. For example, we found extremely high charges for occupational and speech therapy with no assurance that cost reports reflected only allowable costs. Cost-report audits are the primary means available to ensure that SNF cost reports reflect only allowable costs. However, the resources expended on auditing cost reports have been declining in relation to the number of SNFs and SNF costs for a number of years. The percentage of SNFs subjected to field audits has decreased as has the extent of auditing done at the facilities that are audited. Under these circumstances, we think it would be prudent for HCFA to do thorough audits of a projectable sample of SNF cost reports. The results could then be used to adjust cost-report databases to remove the influence of unallowable costs, which would help ensure that inflated costs are not used as the base for PPS rate setting. The summary of the administration’s proposal for a home health PPS is very general, saying only that a PPS for an appropriate unit of service would be established in 1999 using budget neutral rates calculated after reducing expenditures by 15 percent.
HCFA estimates that this reduction will result in savings of $4.7 billion over fiscal years 1999 through 2002. Using an episode of care covering a period of time such as 30 or 100 days as the unit of service has a greater potential for controlling costs. However, agencies could gain by reducing the number of visits during that period, potentially lowering quality of care. If an episode of care is chosen as the unit of service, HCFA would need a method to ensure that beneficiaries receive adequate services and that any reduction in services that can be accounted for by past overprovision of care does not result in windfall profits for agencies. In addition, HCFA would need to be vigilant to ensure that patients meet coverage requirements, because agencies would be rewarded for increasing their caseloads. HCFA is currently testing various PPS methods and patient classification systems for possible use with home health care, and the results of these efforts may shed light on how to best design a home health PPS. We have the same concerns about the quality of HCFA’s home health care cost-report databases for PPS rate-setting purposes that we do for the SNF database. Again, we believe that adjusting the home health databases, using the results of thorough cost-report audits of a projectable sample of agencies, would be wise. We are also concerned about the appropriateness of using current Medicare data on visit rates to determine payments under a PPS for episodes of care. As we reported in March 1996, controls over the use of home health care are virtually nonexistent. Operation Restore Trust, a joint effort by federal and state agencies in several states to identify fraud and abuse in Medicare and Medicaid, found very high rates of noncompliance with Medicare’s coverage conditions in targeted agencies. For example, in a sample of 740 beneficiaries drawn from 43 home health agencies in Texas and 31 in Louisiana that were selected because of potential problems, some or all of the services received by 39 percent of the beneficiaries were denied. About 70 percent of the denials were because the beneficiary did not meet the homebound definition. Although these are results from agencies suspected of having problems, they illustrate that substantial amounts of noncovered care are likely to be reflected in HCFA’s home health care utilization data. For these reasons, it would also be prudent for HCFA to conduct thorough on-site medical reviews of a projectable sample of agencies to give it a basis to adjust utilization rates for purposes of establishing a PPS. Turning to inpatient rehabilitation facilities, a research report detailing a model for a PPS is currently undergoing review. The research was directed at designing a per-episode payment system adjusted for case mix, using a measure of patient functional status—for example, the patient’s mobility—as the adjuster. In general, this and other research has shown that patients in the rehabilitation facilities are more homogeneous than those in SNFs or home health care. Because the goals for the care are also more homogeneous and defined, an episode may be a reasonable choice for a unit of service. Again, the per-episode payment should be structured to reduce the incentives for premature discharge, and adequate review mechanisms to prevent such discharges and other quality problems would be needed. As with SNFs and home health care, we have concerns about the reliability of HCFA’s databases for rate-setting purposes for rehabilitation hospitals because of the low levels of utilization review and cost-report auditing.
As we stated earlier, HCFA should do enough audits and medical review to enable it to adjust its databases to remove the effects of any problems. HCFA would also need an adequate review system under a PPS because rehabilitation facilities would probably have incentives to increase their caseloads, cut corners on quality, or both. HCFA is not currently studying a PPS for long-term care hospitals. Rather, the administration is proposing that any hospitals that newly qualify for long-term care status be paid under the regular inpatient hospital PPS. Also, HCFA officials told us that the agency plans to recommend in the future a coordinated payment system for post-acute care and that long-term care hospitals are being considered for inclusion under such a payment system. I will discuss the coordinated payment concept later in this statement. The administration has also announced that it will propose requiring SNFs to bill Medicare directly for all services provided to their beneficiary residents except for physician and some practitioner services. We support this proposal as we did in a September 1995 letter to the House Ways and Means Committee. We and the HHS Inspector General have reported on problems, such as overutilization of supplies, that can arise when suppliers bill separately for services for SNF residents. Requiring SNFs to bill for all services furnished to their residents would help prevent duplicate billings for supplies and services and billings for services not actually furnished by suppliers. In effect, outside suppliers would have to make arrangements with SNFs under such a provision so that nursing homes would bill for suppliers’ services and would be financially liable and medically responsible for the care. There can be considerable overlap in the types of services provided and the types of beneficiaries that are treated in each of the three post-acute care settings. For example, physical therapy and other rehabilitation services can be provided by a SNF, a home health agency, or a rehabilitation facility. Both HCFA and the Prospective Payment Assessment Commission (ProPAC) have noted that the ability to substitute care among post-acute settings may contribute to inappropriate spending growth, even after payment policies are improved for individual provider types. Although prospective payment encourages providers to deliver care more efficiently, facility-specific payments may encourage them to lower their costs by shifting services to other settings. The administration has therefore announced that it will in the future recommend a coordinated payment system for post-acute care services. Such a system will be designed to help ensure that beneficiaries receive quality care in the appropriate settings, and that any patient transfers among settings occur only when medically appropriate rather than in efforts to generate additional revenues. While no details are available about how a coordinated post-acute payment system would operate, presumably it will entail consolidated (bundled) payments to one entity for the different types of providers. In fact, ProPAC has suggested a system that bundles acute and post-acute payments. One of the most important design issues in a bundled payment approach is deciding which provider would receive the payment. Because this provider would have to organize and oversee the continuum of services for beneficiaries, it would bear the risk that payments would not cover costs. Options for this role include an acute-care hospital, a post-acute care provider, or a provider service network.
In addition, post-acute care utilization needs to be accurately predicted to ensure that prospective rates are adequate to cover costs but also give an incentive to provide cost-effective care. Bundling acute and post-acute care would have a number of potential advantages and disadvantages. Optimally, bundling of payments would encourage continuity of care. If, for example, the inpatient hospital has a greater stake in the results, bundling could lead to both better discharge planning as well as improved transfer of information from the hospital to the post-acute provider. Bundling payments to the hospital could also eliminate a PPS hospital’s financial incentive to discharge Medicare patients before they are ready, because patients discharged prematurely may require extensive post-acute services for which the hospital is liable. Furthermore, bundling with an appropriate payment rate would give providers more incentive to furnish the mix of inpatient and posthospital services that yield the least costly treatment of an entire episode of care and thus help control growth in the volume of post-acute services. Finally, to the extent that the bundling arrangement promotes joint accountability, combining responsibility for hospital and post-acute providers could lead to better outcomes. There are a number of potential disadvantages as well. Because bundled payments would represent some level of financial risk, whoever received the bundled payment would need to have the resources to accept the risk. Moreover, bearing risk often gives incentive to shift the risk to others and raises concerns about quality. A key to the success of any bundling system is coordinating care and continuously monitoring a patient during the entire episode. However, some providers might not have the capabilities to do this. For example, if, as ProPAC has suggested, both acute- and post-acute care were bundled and if hospitals received the bundled payment, some hospitals might not have the resources, information, or expertise to properly manage patients’ post-acute care. The same could be said for SNFs and home health agencies. An additional concern is that whoever received the bundled payment could have dominance over the other providers and make choices about acute- and post-acute care settings that are driven primarily by concerns about cost. For example, hospitals might try to maximize their profit by limiting post-acute services or be tempted to screen admissions to avoid patients with high risks of heavy posthospital care. Moreover, bundling acute and post-acute payments would not address home health care that is not preceded by a hospital stay; as noted earlier, about a third of beneficiaries receiving home health care fall into this category. A bundled payment system would not affect home health agency incentives for such patients. Finally, beneficiary advocacy groups have expressed concern about potential harmful effects of this system on patients’ freedom of choice and how the quality and appropriateness of care could be ensured. In conclusion, it is clear from the dramatic cost growth for SNF, home health, and rehabilitation facility care that the current Medicare payment mechanisms for the providers need to be revised. As more details concerning the administration’s or others’ proposals for revising those systems become available, we would be glad to work with the Committee and others to help sort out the potential implications of suggested revisions. This concludes my prepared remarks, and I will be happy to answer any questions. For more information on this testimony, please call William Scanlon on (202) 512-7114 or Thomas Dowdal, Senior Assistant Director, on (202) 512-6588.
Other major contributors include Patricia Davis, Roger Hultgren, and Sally Kaplan.
GAO discussed Medicare's skilled nursing facility (SNF), home health care, and inpatient rehabilitation benefits and the administration's forthcoming legislative proposals related to them. GAO noted that: (1) Medicare's SNF costs have grown primarily because a larger portion of beneficiaries use SNFs than in the past and because of a large increase in the provision of ancillary services; (2) for home health care costs, both the number of beneficiaries and the number of services used by each beneficiary have more than doubled; (3) although the average length of stay has decreased for inpatient rehabilitation facilities, a larger portion of Medicare beneficiaries use them now, which results in cost growth; (4) the administration's major proposals for both SNFs and home health care are designed to give the providers of these services increased incentives to operate efficiently by moving them from cost reimbursement to a prospective payment system; (5) what remains unclear about these proposals is whether an appropriate unit of service can be defined for calculating prospective payments and whether the Health Care Financing Administration's databases are adequate for it to set reasonable rates; (6) administration officials also have discussed their intention to propose in the future a coordinated payment system for post-acute care as a method to give providers efficiency incentives; (7) these concepts have appeal, but GAO has concerns about them similar to those it has for SNF and home health prospective payments; (8) finally, the administration is proposing that SNFs be required to bill for all services provided to their Medicare residents rather than allowing outside suppliers to bill; and (9) this latter proposal has merit because it would make control over the use of ancillary services significantly easier.
We reported in September 2012 that FEMA’s administrative costs had been increasing for all sizes of disasters. According to FEMA, administrative costs include, among other things, the salary and travel costs for its disaster workforce, rent and security expenses associated with establishing and operating its field office facilities, and supplies and information technology support for its deployed staff. In September 2012, based on our analysis of 1,221 small, medium, and large federal disaster declarations during fiscal years 1989 through 2011, we found that the average administrative cost percentage for these disaster declarations doubled from 9 percent in the 1989-to-1995 period to 18 percent in the 2004-to-2011 period, as shown in table 1. We also found that the growth in administrative costs occurred for all types of disaster assistance, including those related to providing Individual Assistance, Public Assistance, and assistance for those disasters that provided both Individual Assistance and Public Assistance. As shown in table 2, since fiscal year 1989, administrative cost percentages doubled for disaster declarations with Individual Assistance only, quadrupled for declarations with Public Assistance only, and doubled for declarations with both Public Assistance and Individual Assistance.

To address these rising costs, FEMA issued guidelines and targets intended to improve the efficiency of its efforts and to help reduce administrative costs. In November 2010, FEMA issued guidance on how to better control administrative costs associated with disaster declarations. The guide noted that administrative costs for incidents of similar size and type had grown for 20 years and that, in the past, little emphasis had been placed on controlling overall costs. The document provided guidance on how to set targets for administrative cost percentages, plan staffing levels, time the deployment of staff, and determine whether to use “virtual” field offices instead of physical field offices. However, in September 2012, we found that FEMA did not require that this guidance be followed or that targets be met because the agency’s intent was to provide guidance that shapes how its leaders in the field think about gaining and sustaining efficiencies in operations rather than to lay out a prescriptive formula. As a result, we concluded that FEMA did not track or monitor whether its cost targets were being used or achieved.

In September 2012, we also found that in many cases FEMA exceeded its targets for administrative costs. Based on our analysis of the 539 disaster declarations during fiscal years 2004 through 2011, we found that 37 percent of the declarations exceeded the 2010 administrative cost percentage targets. Specifically: For small disaster declarations (total obligations of less than $50 million), FEMA’s target range for administrative costs is 12 percent to 20 percent; for the 409 small declarations that we analyzed, 4 out of every 10 had administrative costs that exceeded 20 percent. For medium disaster declarations (total obligations of $50 million to $500 million), the target range for administrative costs is 9 percent to 15 percent; for the 111 declarations that we analyzed, almost 3 out of every 10 had administrative costs that exceeded 15 percent.
For large disaster declarations (total obligations of more than $500 million and up to $5 billion), the target range for administrative costs is 8 percent to 12 percent; for the 19 large declarations that we analyzed, about 4 out of every 10 had administrative costs that exceeded 12 percent. As a result, in September 2012, we recommended that FEMA implement goals for administrative cost percentages and monitor performance to achieve these goals. However, as of July 2014, FEMA had not taken steps to implement our recommendation. In December 2013, FEMA officials stated that they are implementing a system called FEMAStat to, among other things, collect and analyze data on the administrative costs associated with managing disasters to enable managers to better assess performance and progress within the organization. As part of the FEMAStat effort, in 2012 and 2013, FEMA collected and analyzed data on the administrative costs associated with managing disasters. However, as of July 2014, FEMA was still working on systematically collecting the data and using them to develop a model for decision making. As a result, it is too early to assess whether this effort will improve efficiency or reduce the costs associated with administering assistance in response to disasters. As part of our ongoing work, we will be reviewing these efforts and working with FEMA to better understand the progress the agency has made in monitoring and controlling its administrative costs associated with delivery of disaster assistance and its efforts to decrease the administrative burden associated with its Public Assistance program.

We have also reported on opportunities to strengthen and increase the effectiveness of FEMA’s workforce. More specifically, we previously reported on various FEMA human capital management efforts (as well as human capital management efforts across the federal government) and have made a number of related recommendations for improvement. FEMA has implemented some of these, but others are still underway. Specifically: In June 2011, we found that FEMA’s Strategic Human Capital Plan did not define critical skills and competencies that FEMA would need in the coming years or provide specific strategies and program objectives to motivate, deploy, and retain employees, among other things. As a result, we recommended that FEMA develop a comprehensive workforce plan that identifies agency staffing and skills requirements, addresses turnover and staff vacancies, and analyzes FEMA’s use of contractors. FEMA agreed and, in responding to this recommendation, reported that it had hired a contractor to conduct an assessment of its workforce to inform the agency’s future workforce planning efforts. In April 2012, we found that FEMA had taken steps to incorporate some strategic management principles into its workforce planning and training efforts but could incorporate additional principles to ensure that a more strategic approach is used to address longstanding management challenges. Further, FEMA’s workforce planning and training could be enhanced by establishing lines of authority for these efforts. We also found that FEMA had not developed processes to systematically collect and analyze agencywide workforce and training data that could be used to better inform its decision making.
We recommended that FEMA: identify long-term quantifiable mission-critical goals that reflect the agency’s priorities for workforce planning and training; establish a time frame for completing the development of quantifiable performance measures related to workforce planning and training efforts; establish lines of authority for agency-wide workforce planning and training efforts; and develop systematic processes to collect and analyze workforce and training data. DHS concurred with all the recommendations and FEMA is still working to address them. For example, in April 2014, FEMA issued a notice soliciting contracting services for a comprehensive workforce structure analysis for the agency. As part of our ongoing review of FEMA’s workforce management, we are gathering information on FEMA’s other efforts to address our recommendations. In May 2012, we reported on the management and training of FEMA Reservists, a component of FEMA’s workforce, referred to at that time as Disaster Assistance Employees (DAE). Specifically, we found that FEMA did not monitor how the regions implement DAE policies and how DAEs implement disaster policies across regions to ensure consistency. While FEMA’s regional DAE managers were responsible for hiring DAEs, FEMA had not established hiring criteria and had limited salary criteria. Regarding FEMA’s performance appraisal system for DAEs, we found that FEMA did not have criteria for supervisors to assign DAEs satisfactory or unsatisfactory ratings. We also found that FEMA did not have a plan to ensure DAEs receive necessary training and did not track how much of the Disaster Relief Fund was spent on training for DAEs. We recommended, among other things, that FEMA develop a plan for how it will better communicate policies and procedures to DAEs when they are not deployed; establish a mechanism to monitor both its regions’ implementation of DAE policies and DAEs’ implementation of FEMA’s disaster policies; establish standardized criteria for hiring and compensating DAEs; and establish a plan to ensure that DAEs have opportunities to participate in training and are qualified. DHS concurred with the recommendations and FEMA has taken steps to address several of them. For example, in June 2012, FEMA implemented a communication strategy with its reservist workforce that included video conferences, a web blog series, and a FEMA weekly bulletin sent to Reservists’ personal email addresses, among other things. Also, in October 2012, DHS reported that FEMA had resolved the outstanding issues of inconsistent implementation of DAE policies by centralizing control over hiring, training, equipment, and deployment within a single headquarters-based office. FEMA is working to address our other recommendations, and we will continue to monitor its progress. In our March 2013 report, we examined how FEMA’s reservist workforce training compared with training of other similar agencies, and the extent to which FEMA had examined these agencies’ training programs to identify useful practices. We found that FEMA had not examined other agencies’ training programs, and therefore, we recommended that FEMA examine the training practices of other agencies with disaster reservist workforces to identify potentially useful practices; DHS concurred with our recommendation and described plans to address it. As part of our ongoing review, we are gathering information on FEMA’s efforts to address our recommendation.
At the request of this committee, we are also currently assessing the impact of workforce management and development provisions in the Post-Katrina Act on FEMA’s response to Hurricane Sandy. We also have plans to conduct additional work to assess the impact of a variety of other emergency management related provisions in the Post-Katrina Act (for example, provisions related to FEMA’s contracting efforts, information technology systems, and disaster relief efforts). Among other things, the Post-Katrina Act directed FEMA to enhance workforce planning and development, establish standards for deployment capabilities (including credentialing of personnel), and establish a surge capacity force (SCF) to deploy to natural and man-made disasters, including catastrophic incidents. Some of these efforts were highlighted during Hurricane Sandy, when FEMA executed one of the largest deployments of personnel in its history. For example, the agency’s response to Hurricane Sandy marked the first activation of the DHS SCF, with nearly 2,400 DHS employees deploying to New York and New Jersey to support response and recovery efforts. The agency also launched the new FEMA Qualification System (FQS) on October 1, 2012, just in time for FEMA employees’ deployment to areas affected by Hurricane Sandy. In 2012, FEMA also created a new disaster assistance workforce component called the FEMA Corps. Forty-two FEMA Corps teams, consisting of approximately 1,100 members, were deployed to support Hurricane Sandy response and recovery efforts in the fall of 2012.

FEMA’s deployment of its disaster assistance workforce during the response to Hurricane Sandy revealed a number of challenges and, as a result, FEMA is analyzing its disaster assistance workforce structure to ensure the agency is capable of responding to large and complex incidents, as well as simultaneous disasters and emergencies. For example, FEMA reported that before deployment for Hurricane Sandy, 28 percent of the staffing positions called for by FEMA’s force structure analysis were vacant (approximately 47 percent of positions required by the force structure were filled with qualified personnel, and the remaining 25 percent were filled by trainees). FEMA also reported that deployment of its disaster workforce nearly exhausted the number of available personnel: by November 12, 2012, FEMA had only 355 Reservists (5 percent) available for potential deployment, while 4,708 (67 percent) were already deployed to ongoing disasters and 1,854 (26 percent) were unavailable. In addition, FEMA reported that its plans had not fully considered how to balance a large deployment of personnel and still maintain day-to-day operations. As part of our ongoing work, we will be evaluating FEMA’s efforts to address the challenges identified during the agency’s response to Hurricane Sandy and assessing their impact. We will also determine what progress the agency has made in its workforce planning and development efforts.

In March 2011, we reported on another area of opportunity for FEMA to increase the efficiency of its operations—the management of its preparedness grants. We found that FEMA could benefit from examining its grant programs and coordinating its application process to eliminate or reduce redundancy among grant recipients and program purposes.
As we again reported in February 2012, four of FEMA’s largest preparedness grants (Urban Areas Security Initiative, State Homeland Security Program, Port Security Grant Program, and Transit Security Grant Program), which have similar goals, fund similar types of projects, and are awarded in many of the same urban areas, have application review processes that are not coordinated. In March 2014, in our annual update to our duplication and cost savings work in GAO’s Online Action Tracker, we reported that FEMA has attempted to capture more robust data from grantees during applications for the Port Security Grant Program and the Transit Security Grant Program, because applicants for those programs provide project-level data. However, applications for the State Homeland Security Grant Program and Urban Areas Security Initiative do not contain enough detail to allow for a coordinated review across the four grants, according to FEMA officials. FEMA intends to begin collecting and analyzing additional project-level data using a new system called the Non-Disaster Grants Management System (NDGrants). However, FEMA officials said that implementation of NDGrants had been delayed until 2016 because of reduced funding. While implementing NDGrants should help FEMA strengthen the administration and oversight of its grant programs, a report released by the DHS Office of Inspector General (OIG) in May 2014 identified a number of information control system deficiencies associated with FEMA’s development and deployment of the NDGrants system that could limit the usefulness of the system. Specifically, the OIG reported NDGrants system deficiencies related to security management, access control, and configuration management. According to the OIG’s report, DHS management concurred with the findings and recommendations in the report and plans to work with component management to address these issues. We will continue to monitor FEMA’s implementation of the system as part of our annual update for our duplication and cost savings work.

FEMA has proposed, through the President’s budget requests to Congress, to consolidate its preparedness grant programs into a National Preparedness Grant Program (NPGP) to streamline the grant application process, responding to a recommendation we made in March 2011 (GAO-11-318SP) by eliminating the need to coordinate application reviews. Congressional committees, however, expressed concern that the consolidation plan lacked detail, and the NPGP was not approved for either fiscal year 2013 or 2014. Nonetheless, FEMA again proposed the NPGP consolidation approach for fiscal year 2015, providing additional details such as clarified and revised language relating to governance structures under the proposed program. In responding to questions submitted by the House Committee on Homeland Security’s Subcommittee on Emergency Preparedness, Response and Communications in April 2014, FEMA officials reported that the NPGP would help increase the efficiency of preparedness grants by requiring fewer grant notices for staff to issue and fewer grants to award and by reducing processing time and monitoring trips due to the smaller number of grantees. If approved in the future, and depending on its final form and execution, we believe a consolidated NPGP could help reduce redundancies and mitigate the potential for unnecessary duplication, consistent with our prior recommendation.
Chairman Begich, Ranking Member Paul, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Christopher Keisling, Assistant Director; Aditi Archer, Andrew Berglund, Jeffrey Fiore, Michelle R. Su, Tracey King, David Alexander, and Jessica Orr made contributions to this testimony.

Disaster Resilience: Actions Are Underway, but Federal Fiscal Exposure Highlights the Need for Continued Attention to Longstanding Challenges. GAO-14-603T. Washington, D.C.: May 14, 2014.
Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014.
National Preparedness: FEMA Has Made Progress, but Additional Steps Are Needed to Improve Grant Management and Assess Capabilities. GAO-13-637T. Washington, D.C.: June 25, 2013.
FEMA Reservists: Training Could Benefit from Examination of Practices at Other Agencies. GAO-13-250R. Washington, D.C.: March 22, 2013.
National Preparedness: FEMA Has Made Progress in Improving Grant Management and Assessing Capabilities, but Challenges Remain. GAO-13-456T. Washington, D.C.: March 19, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction’s Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012.
Disaster Assistance Workforce: FEMA Could Enhance Human Capital Management and Training. GAO-12-538. Washington, D.C.: May 25, 2012.
Federal Emergency Management Agency: Workforce Planning and Training Could Be Enhanced by Incorporating Strategic Management Principles. GAO-12-487. Washington, D.C.: April 26, 2012.
Homeland Security: DHS Needs Better Project Information and Coordination among Four Overlapping Grant Programs. GAO-12-303. Washington, D.C.: February 28, 2012.
More Efficient and Effective Government: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-449T. Washington, D.C.: February 28, 2012.
FEMA: Action Needed to Improve Administration of the National Flood Insurance Program. GAO-11-297. Washington, D.C.: June 9, 2011.
Government Operations: Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Government Operations: Actions Taken to Implement the Post-Katrina Emergency Management Reform Act of 2006. GAO-09-59R. Washington, D.C.: November 21, 2008.
Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007.
High Risk Series: GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006.
Disaster Assistance: Information on the Cost-Effectiveness of Hazard Mitigation Projects. GAO/T-RCED-99-106. Washington, D.C.: March 4, 1999.
Disaster Assistance: Information on Federal Disaster Mitigation Efforts. GAO/T-RCED-98-67. Washington, D.C.: January 28, 1998.
Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs. GAO/T-RCED-95-140. Washington, D.C.: March 16, 1995.
Federal Disaster Assistance: What Should the Policy Be? PAD-80-39. Washington, D.C.: June 16, 1980.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Preparing for, responding to, and recovering from disasters is becoming increasingly complex and costly. GAO reported that the federal government appropriated about $41 billion for preparedness grant programs from fiscal years 2002 through 2013 and $6.2 billion to FEMA's Disaster Relief Fund in fiscal year 2014. In addition, FEMA obligated over $80 billion in federal disaster assistance for major disasters declared from fiscal years 2004 through 2011 and responded to more disasters in fiscal year 2011 than in any other year in its history. The larger number and size of disasters have required increasingly complex and costly FEMA operations and processes to prepare for and respond to these events. For example, Hurricane Sandy in October 2012 required one of the largest deployments of disaster personnel in FEMA's history. Similarly, FEMA's own administrative costs—such as the cost to house and deploy its disaster personnel—have also increased. This testimony discusses GAO's work on opportunities to enhance efficiencies in FEMA's operations in three areas: (1) disaster administrative costs, (2) workforce management, and (3) preparedness grant management. This testimony is based on previous GAO reports issued from 2008 to 2014 with selected updates and preliminary observations from GAO's ongoing work on disaster administrative costs and workforce management issues in response to Hurricane Sandy.

GAO's recent and ongoing work examining the Federal Emergency Management Agency's (FEMA) administrative costs of providing disaster assistance highlights opportunities to increase efficiencies and potentially reduce these costs. In September 2012, GAO reported that FEMA's administrative costs for disaster assistance had doubled as a percentage of the overall cost of disasters since fiscal year 1989 and often surpassed FEMA's targets for controlling administrative costs. GAO also concluded that FEMA's administrative costs were increasing for all sizes of disasters and for all types of disaster assistance. FEMA issued guidelines intended to improve the efficiency of its efforts and to help reduce administrative costs. However, FEMA did not make this guidance mandatory because it wanted to allow for flexibility in responding to a variety of disaster situations. In 2012, GAO recommended that the FEMA Administrator implement goals for administrative cost percentages and monitor performance to achieve these goals. However, as of June 2014, FEMA had not taken steps to implement GAO's recommendation. GAO's ongoing work indicates that FEMA is implementing a new system to, among other things, collect and analyze data on the administrative costs associated with managing disasters to enable managers to better assess performance. However, according to officials, FEMA is still working on systematically collecting the data. As a result, it is too early to assess whether this effort will improve efficiencies or reduce administrative costs.

GAO has also reported on opportunities to strengthen and increase the effectiveness of FEMA's workforce management. Specifically, GAO reviewed FEMA human capital management efforts in 2012 and 2013 and has made a number of related recommendations, many of which FEMA has implemented, while others are still underway.
For example, GAO recommended that FEMA identify long-term quantifiable mission-critical goals and establish a time frame for completing the development of quantifiable performance measures for workforce planning and training, establish lines of authority for agency-wide efforts related to workforce planning and training, and develop systematic processes to collect and analyze workforce and training data. FEMA concurred and is still working to address these recommendations. In addition, FEMA's deployment of its disaster assistance workforce during the response to Hurricane Sandy revealed a number of challenges. In response, according to agency officials, FEMA is, among other things, analyzing its disaster assistance workforce structure to ensure the agency is capable of responding to large and complex incidents. GAO will continue to evaluate these efforts to assess their effectiveness.

In March 2011, GAO reported that FEMA could enhance the coordination of application reviews of grant projects across four of the largest preparedness grants (Urban Areas Security Initiative, State Homeland Security Program, Port Security Grant Program, and Transit Security Grant Program), which have similar goals, fund similar types of projects, and are awarded in many of the same urban areas. GAO recommended that FEMA coordinate the grant application process to reduce the potential for duplication. FEMA has attempted to use data to coordinate two programs and also proposed to consolidate its preparedness grant programs, but FEMA's data system has been delayed, and Congress did not approve FEMA's consolidation proposal for either fiscal year 2013 or 2014.
The Telecommunications Act of 1996 sets forth the nation’s goals for providing affordable telecommunications services to consumers nationwide, particularly to populations such as individuals living in rural, isolated, or high-cost areas, or those with low incomes; schools and libraries; and rural health care facilities. The act instructed FCC to establish a universal service support mechanism to ensure that eligible schools and libraries have affordable access to and use of certain telecommunications services for educational purposes. In addition, Congress authorized FCC to “establish competitively neutral rules to enhance, to the extent technically feasible and economically reasonable, access to advanced telecommunications and information services for all public and nonprofit elementary and secondary school classrooms . . . and libraries. . . .” Based on this direction, and following the recommendations of the Federal-State Joint Board on Universal Service, FCC established the Schools and Libraries Universal Service Support Mechanism, commonly referred to as the E-rate program. FCC designated USAC to carry out the day-to-day activities of the program, which is funded from statutorily mandated payments to the Universal Service Fund. FCC oversees USAC and the program through rule-making proceedings, enforcement actions, audits of participants, and reviews of funding decision appeals from participants. FCC also reviews USAC’s procedures, including its process for reviewing applications for funding; meets frequently with USAC staff; and provides guidance letters to USAC. A memorandum of understanding between FCC and USAC, first executed in June 2007 and updated in September 2008, as well as FCC orders and rules, set forth the roles and responsibilities of the two parties in the management, oversight, and administration of the program. The E-rate program provides schools, school districts, libraries, and consortia with discounts on telecommunications services, Internet access, and data transmission wiring and components used for educational purposes—that is, activities that are integral, immediate, or proximate to the education of students or to the provision of services to library patrons, such as activities that occur on library or school property. Based on indicators of need, eligible schools and libraries qualify for a discount of 20 percent to 90 percent on the cost of services and must show that they can pay for the undiscounted portion of services. Indicators of need include the percentage of students eligible for free or reduced-price lunches through the National School Lunch Program and whether the entity is located in a rural area. Table 1 shows the discount percentages entities are eligible for based on these indicators. Eligible entities may apply annually for program support. 
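To make the discount mechanics concrete, the sketch below shows one way the two indicators of need described above could map to a discount percentage. It is only an illustration: the bands and percentages are hypothetical placeholders standing in for Table 1, which is not reproduced here, and the actual levels are set by FCC rule.

# Illustrative sketch only; the discount bands below are assumed, not taken from Table 1.
def erate_discount(pct_lunch_eligible, is_rural):
    """Return an illustrative E-rate discount percentage (20 to 90)."""
    # (lower bound of lunch-eligibility band, urban discount, rural discount)
    bands = [
        (75, 90, 90),
        (50, 80, 80),
        (35, 60, 70),
        (20, 50, 60),
        (1, 40, 50),
        (0, 20, 25),
    ]
    for floor, urban, rural in bands:
        if pct_lunch_eligible >= floor:
            return rural if is_rural else urban
    return 20  # program minimum

# Example: a rural school where 40 percent of students qualify for free or
# reduced-price lunch falls in the 35-to-49-percent band of this illustration.
print(erate_discount(40, is_rural=True))  # prints 70 under these assumed bands

The point of the sketch is only that the discount depends on relative need (lunch-program eligibility) and location (rural or urban), not on the services requested.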
Based on the broad direction in the act, FCC defined two general types of services that are eligible for E-rate discounts: Priority 1 services, which include telecommunications services, such as local, long-distance, and wireless (e.g., cellular) telephone services, as well as data links (e.g., T-1 lines) and Internet access services, such as Web hosting and e-mail services—all of which receive priority for funding under FCC’s rules; and Priority 2 services, which include cabling, components, routers, switches, and network servers that are necessary to transport information to individual classrooms, public rooms in a library, or eligible administrative areas, as well as basic maintenance of internal connections, such as the repair and upkeep of eligible hardware and basic technical support. Lists of specific eligible services, including the conditions under which they are eligible, are updated annually by USAC, finalized by FCC after a public comment period, and posted on USAC’s Web site. Items ineligible for E-rate discounts include, among other things, end-user products and services such as Internet content, Web site content maintenance fees, end-user personal computers, and end-user software. All eligible and properly completed requests for Priority 1 services are funded up to the available amount of funding. Priority 2 services, herein referred to as internal connections, are funded with what remains after commitments have been made for all approved requests for Priority 1 services in a given year. Requests for internal connections services are prioritized by the discount level of the applicant, with funding going first to applicants with the highest discount level—90 percent—and then to applicants at each descending discount level until the funding is exhausted; in 2007, for example, internal connections funding was provided to applicants with discount levels down to 81 percent. Because of this prioritization, available funding may be exhausted before all eligible and properly completed requests for internal connections are funded. According to FCC, the rules of priority equitably provide the greatest assurance of support to schools and libraries with the greatest level of economic disadvantage. The rules ensure that all applicants filing during a time period specified by USAC receive at least some support in the event that the amounts requested for support exceed the total support available in a funding year. The steps applicants must carry out to obtain program support—including the application, review, invoicing, and reimbursement processes—are illustrated in figure 1. This figure is followed by a more detailed description of each of these steps. Prior to submitting an application for E-rate support, an applicant must complete several steps, including the following: Prepare a technology plan. The applicant conducts a technology assessment and develops a technology plan to ensure that any services it obtains will be used effectively and that it can provide for the nondiscounted portion of services as well as for the goods or services that are ineligible for E-rate funding. Open competitive bidding. The applicant identifies products and services needed to implement its technology plan and submits a form to USAC describing the desired products and services. USAC posts completed forms on its Web site so that service providers can view and consider bidding on these requests.
To participate in the E-rate program, service providers must obtain identification numbers from USAC and certify compliance with program rules in each year that they provide services under the program. Select a service provider and enter into a service agreement. At least 28 days after the applicant’s description of requested services is posted to USAC’s Web site, an applicant may enter into an agreement with a provider of eligible services. After completing these steps, the applicant submits its application for program support to USAC. USAC accepts applications during a filing window, the exact dates for which change somewhat each year but are generally from November to February. The information the applicant provides on this form includes, but is not limited to, the following: the discount percentage to which the applicant is entitled, calculated using a worksheet provided on the application; detailed information about each requested service or product and its cost—some services have both eligible and ineligible components, in which case the applicant must calculate the portion of the service eligible for an E-rate discount, a process referred to as cost allocation; and certifications that, among other things, the applicant has adequately budgeted for the undiscounted portion of services, as well as related ineligible services—such as computers, training, software, and electrical capacity—needed to make effective use of the services ordered. USAC reviews requests for funding to determine whether applicants have properly complied with program rules and requirements; this process is known as the program integrity assurance (PIA) review. Reviewers may ask applicants to submit additional information, such as verification of a contract award date or enrollment and income data for newly constructed schools. Some applications undergo “selective” reviews, which require more detailed documentation that the applicant has complied with the rules. Applicants are chosen for selective review based on defined criteria to test compliance with specific FCC rules. Additionally, applicants that fail selective review in a given year must go through selective review the following year. Based on the outcome of the application review, USAC issues funding commitment decision letters stating how much funding the applicant may receive based on eligible services provided within the funding year deadlines. Funding commitments are conditional upon applicants meeting additional requirements as described later. Funding commitments may be for the full amount requested or less than the amount requested, or funding may be denied entirely for reasons such as competitive bidding violations or requests for ineligible services. Funding requests for Priority 2 services may also be denied if the applicant’s discount percentage falls below the annual discount percentage threshold for internal connections. After eligible services have been delivered, service providers or applicants submit invoices to USAC to request reimbursement for the discounted portion of services. Before seeking reimbursement for the discounted portion of services from USAC, applicants must confirm (1) that the services are planned to be or are being provided; (2) approval of their technology plans by a state or other authorized body, if required; and (3) compliance with the Children’s Internet Protection Act (CIPA) and the Neighborhood Children’s Internet Protection Act, if required.
Under these acts, if required, schools and libraries receiving support for Internet access, internal connections, or basic maintenance must certify that they have in place certain Internet safety policies and technology protection measures. Funding requests for telecommunications services do not require certification of CIPA compliance. Service providers may apply the discount rate to the applicant’s bill before sending it to the applicant, in which case the applicant pays only the nondiscounted portion and the service provider invoices USAC directly to obtain reimbursement. Alternatively, applicants may pay for services in full and submit a form to USAC to request reimbursement. Regardless of which invoicing method is used, USAC reviews the invoices and disburses payments for universal service support to service providers; under the latter method, service providers remit the discounted amount to the applicant. To ensure compliance with FCC rules, both USAC and FCC’s Office of Inspector General periodically select a sample of participants to audit and conduct site visits of beneficiaries. Each year from 1998 through 2007, the amount of funding applicants requested exceeded the amount available, but the amounts requested have generally declined since 2002, with most of the decline driven by fewer requests for Priority 2 services—the wiring and components needed for data transmission. Although requests for Priority 1 services—that is, telecommunications and Internet access—have remained roughly level since 2002, commitments have increased, at least in part, because applicants received a greater proportion of the funds they requested. The increasing amounts committed for Priority 1 services have the effect of decreasing the amounts available for Priority 2 services, which are funded only after all eligible Priority 1 services requests are satisfied. Regarding disbursements, a significant proportion of committed funds are not paid out to beneficiaries. Funding that is not disbursed in the year for which it was committed is carried over to the next funding year and made available for new commitments, but undisbursed funding is still problematic because it prevents some applicants from receiving funding in a given year. From 1998 through 2007, applicants requested a total of about $41 billion in E-rate funding—174 percent of the $23.4 billion in program funding available during that time. Further, in each of these years, the amounts requested exceeded the amounts available. However, the amounts requested have generally declined since 2002. Figure 2 shows the annual funding levels and the amount of E-rate funding requested for Priority 1 services and Priority 2 services for each year from 1998 through 2007. Since 2002, the number of applicants and the amounts requested for Priority 1 services have been stable. As figure 3 shows, the number of applicants for telecommunications services has been generally stable since 1998, and the number of applicants for Internet access, after showing an increase the first few years of the program, has been roughly level since 2002. From 2002 through 2007, the amounts requested for Priority 1 services have also been relatively stable. In contrast, the number of applicants and amounts requested for Priority 2 services have declined for the past several years, accounting for most of the decline in the overall amount of funding requested. The amount of funding requested for Priority 2 services declined 50 percent from 2002 through 2007. (See fig. 4.)
The ratio of funding requested for Priority 2 services to that requested for Priority 1 services has also shifted significantly. From 1998 through 2002, funding for Priority 2 services was sought at a rate of 2.3 to 1 over funding for Priority 1 services, while from 2003 through 2007, this ratio was 1.4 to 1. The following factors may help explain the decrease in requests for Priority 2 services (internal connections). In 1999, the second funding year, Priority 2 requests were funded down to the 20 percent discount level, which means that all eligible requests could be funded. As a result, according to USAC, many entities with lower discounts applied the following years, hoping that the Priority 2 cutoff point would be similarly low; this is consistent with the dramatic increase in the number of applicants and amount requested in 2000. But the cutoff point in the following 3 years was in the 80 percent range, and, as a result, according to USAC officials, there was a gradual drop-off in Priority 2 requests from entities with lower discounts. According to FCC, entities with low discount levels stopped applying for Priority 2 funding because they knew that their requests would not receive funding. A second factor is an FCC rule implemented in 2005 that limited applicants’ receipt of Priority 2 funding to 2 out of every 5 years, reducing the number of applicants for these services in a given year. FCC adopted this rule in order to make funding for internal connections available to more eligible schools and libraries on a regular basis. The emphasis on Priority 1 services is likely to continue, according to our analysis of survey responses on future information technology goals. We asked respondents about a number of goals related to telephone and Internet connectivity and equipment needed to make use of such connectivity; items in the survey included both those eligible for E-rate discounts and those not eligible. Our analysis of responses to this question shows that participants are somewhat more focused on goals related to maintaining existing information technology services than on those related to adding new capabilities. For instance, we estimate that providing telephone services is a goal for 96 percent of participants and providing access to the Internet is a goal for 91 percent to 94 percent of participants; in contrast, installing or upgrading wiring and components needed for Internet or network access is a goal for 73 percent to 74 percent of participants. Similarly, when we asked what participants’ highest-priority information technology goals were, the E-rate-eligible expenses cited most often were providing (1) telephone services, (2) additional bandwidth to locations already equipped with Internet access, and (3) Internet access for student or library patron use. According to our analysis of survey responses, the highest-priority goal of participants is increasing the number of or replacing existing computers for student or library patron use, but the E-rate program does not cover either. While the E-rate program’s statutory purpose is to help schools and libraries obtain advanced telecommunication services, it is not clear whether the growing emphasis on Priority 1 services and the corresponding decline in emphasis on Priority 2 services represent the most efficient and effective use of the program resources.
As the Office of Management and Budget (OMB) noted in a 2005 assessment of the E-rate program, given the increase in schools’ and libraries’ level of Internet connectivity, it is no longer clear that the program serves an existing need. Similarly, it is difficult to determine whether the program’s funding structure—including the priority rules and the discount matrix, which contributes to the trends in funding—is the best way to distribute funding in a manner consistent with the program’s intent. As we discuss below, FCC does not have specific, outcome-oriented performance goals or long-term goals for the program, and therefore the agency does not have a basis on which to determine whether the growing emphasis on Priority 1 services is appropriate. FCC’s rule-making proceeding on universal service reform, which is discussed in more detail in the following section, has been ongoing since 2005, but FCC has not made a determination—either as part of this proceeding or otherwise—as to what changes, if any, should be made to the overall structure of the program to better achieve the goals of the act. Similar to the trend in funding requests, funding commitments also show a growing emphasis on Priority 1 services. During the first years of the program, more funding was committed for Priority 2 services than for Priority 1 services, but this trend reversed in 2004 and continued through 2007, as figure 5 shows. Although commitments for Priority 2 services increased in 2007, they were outweighed by commitments for Priority 1 services by 64 percent. From 1999 through 2007, the amounts committed annually for telecommunications services increased each year for a total increase of 79 percent, and the amounts committed annually for Internet access nearly doubled. The increase in amounts committed for Priority 1 services is a result of individual applicants receiving a greater proportion of the funding they request, and not a result of an increasing number of requests because, as noted earlier, the number of requests for these services has not been growing at a substantial rate. As figure 6 shows, the proportion of requested funding that applicants receive as a commitment has been increasing, with about half of applicants receiving 75 percent or more of the amount they requested in 2000 and almost 80 percent of applicants receiving 75 percent or more in 2007. In addition to the proportion of dollars committed in each of these service categories, the proportion of participants that receive commitments in these categories is important, particularly when considering whether funds are being targeted appropriately. Based on our survey, we estimate that 99 percent of participants have used E-rate to pay for telephone services, and around 75 percent have used E-rate to pay for access to the Internet, whereas 36 percent to 38 percent of participants have used E-rate to install or upgrade wired internal connections and 20 percent to 24 percent have used it to install or upgrade wireless internal connections. USAC stated that it is likely that the lower usage levels for internal connections are due to the inherent limitations that the funding cap places on access to Priority 2 funding.
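A minimal sketch of how the priority rules described earlier interact with the annual cap may help illustrate this crowding-out effect. The request amounts below are hypothetical, and the sketch ignores carryover funds and the handling of a discount band that cannot be fully funded; it is not a description of USAC's actual commitment process.

CAP = 2.25e9  # annual E-rate funding cap, in dollars

def commit_funds(priority1_requests, priority2_requests, cap=CAP):
    """priority1_requests: list of dollar amounts (funded in full).
    priority2_requests: list of (discount_level, dollar_amount) tuples."""
    remaining = cap - sum(priority1_requests)  # Priority 1 is funded first
    committed_p2 = []
    # Fund Priority 2 requests starting at the highest discount level (90 percent)
    # and working downward until the remaining funds are exhausted.
    for discount, amount in sorted(priority2_requests, reverse=True):
        if amount > remaining:
            break  # funding runs out before lower-discount applicants are reached
        committed_p2.append((discount, amount))
        remaining -= amount
    return committed_p2, remaining

# Hypothetical year: $1.5 billion in Priority 1 requests and four Priority 2
# requests at descending discount levels.
p2_requests = [(90, 400e6), (85, 200e6), (81, 100e6), (70, 200e6)]
funded, leftover = commit_funds([1.5e9], p2_requests)
print(funded)    # the 90, 85, and 81 percent requests fit under the cap
print(leftover)  # $50 million remains, too little for the 70 percent request

The point is simply that every additional dollar committed to Priority 1 requests reduces what is left for Priority 2 applicants at lower discount levels.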
The increasing success of applicants requesting Priority 1 services has implications for the amount of funding available in future years for Priority 2 services and, accordingly, for how FCC manages the E-rate program and whether the program’s existing structure is still suitable to best meet the current technology needs of schools and libraries. From 2002 through 2007, requests for Priority 1 services averaged 69 percent of available funding; if a substantially higher proportion of such requests had been funded, a smaller percentage of funding would have been available for Priority 2 requests. As it was, from 1998 through 2007, after eligible requests for Priority 1 services were satisfied, only about one-third of all requests for Priority 2 services were able to be funded. Without clearly defined, long-term goals, as well as specific short-term goals, FCC lacks a basis for determining if allocating funding in this manner is appropriate. Of the $19.5 billion in E-rate funding committed to schools and libraries between 1998 and 2006, $5.0 billion—more than one-quarter—was not disbursed. Starting in 2003, funds that were not disbursed in a given year were carried forward to subsequent years to be available for commitment, thereby increasing the amount that could be committed beyond the $2.25 billion cap. The amount of committed funding not used includes $2.6 billion for Priority 1 services and $2.4 billion for Priority 2 services. Figure 7 shows the percentage of committed funding that was disbursed in each service category each year from 1998 through 2006. Underuse of committed funding is widespread among participants, but the proportion of participants using a higher percentage of the funds committed to them is rising. Thirty-five percent of participants in 2006 received disbursements for less than 75 percent of the funds that were committed to them, including 9 percent that did not receive any disbursement, but the percentage of participants receiving disbursements for 75 percent or more of their committed funds increased each year from 2001 to 2006 (see fig. 8). Similarly, the proportion of participants that did not receive any disbursement has been declining since 1999. Nonetheless, the overall disbursement rate has not increased because applicants that ultimately do not receive disbursements equal to their funding commitment are receiving relatively larger commitments. For a number of reasons, participants may not use the full amount of their funding commitment: Participants’ expenditures are less than the amounts applied for. Applicants may overestimate costs for Priority 1 services, such as telephone bills and Internet access charges, to ensure sufficient funding for the year. Based on our survey, we found that lower-than-projected costs of Priority 1 services were a major reason for not using all committed funds for an estimated 54 percent of participants and a minor reason for an additional 20 percent of participants. State E-rate officials we met with noted that there is no disincentive to “aim high” in the amount of funding requested. Additionally, some of these officials told us that participants may have planned for Priority 2 services projects or upgrades, but changes in their circumstances resulted in project delays or cancellations.
For example, in the time between applying for funding and receiving the commitment decision, the local funds needed for the project may no longer be available or the construction of a new school, including the installation of network wiring, could be delayed. Participants do not seek reimbursement for the full amount of E-rate eligible expenses because of the complexity of paperwork and lack of staff expertise. According to state E-rate officials we spoke with, bills and invoices from service providers are complicated, and participants do not always identify all items eligible for reimbursement in part because it can be unclear which items are eligible and which are not, particularly for Priority 1 services. Moreover, participants are commonly dealing with multiple applications covering 2 or 3 funding years at once. This complexity makes billing even more complicated because it can be difficult to determine which year’s funding commitment is associated with which bill. In addition, school and library staff responsible for E-rate administrative tasks face challenges associated with turnover and availability. Based on our survey, we estimate that about one-quarter of the individuals at schools and libraries responsible for E-rate-related tasks have 3 years or less experience with the E-rate program. Several state E-rate officials we met with said that when new employees take over for someone who has left, they may not know it is necessary to apply for reimbursements for the prior year’s commitments. These officials also noted that most individuals who are responsible for E-rate tasks have other primary job responsibilities, and E-rate is not their first priority. USAC officials also identified a number of factors that can affect the timeliness with which disbursements are made: Priority 2 projects do not have to be completed within the funding year and are subject to a variety of automatic extensions for delivery of service, as well as extensions that can be requested by the applicant. Larger funding requests may take USAC longer to review, which results in a later funding decision and later installation of the project by the applicant. Moreover, larger projects take longer for the applicant to complete than smaller projects. Invoice reviews for larger projects may take longer for USAC to complete. In some instances where heightened scrutiny on applications or law enforcement action is involved, disbursements may be held up for a period of time while the issues are resolved. As a result of these conditions, according to FCC and USAC officials, earlier funding years have incrementally more funds disbursed than more recent years. Finally, USAC noted that some committed funds are also not disbursed because when a service provider or beneficiary submits invoices for payment, USAC may identify services or uses that are not eligible for reimbursement. We reported in 2000 on the issue of undisbursed funds in the E-rate program, recommending that FCC take steps to identify factors affecting the rate at which funds are disbursed, and to address these factors. The following actions were taken in response to this recommendation: FCC and USAC agreed to commit funds above the $2.25 billion cap and used this approach between August 2001 and September 2004. FCC then determined that the Antideficiency Act applies to the Universal Service Fund. 
Once this determination was made, funding commitments were considered obligations for the purposes of the act, and therefore USAC could no longer commit funds above the cap without larger budgetary resources being made available. In 2003, FCC amended its rules to allow unused funds from prior funding years to be carried forward on an annual basis and be available for commitment the next funding year; previously, these funds were used to reduce the amounts telecommunications companies were required to pay into the Universal Service Fund. FCC has carried forward funding several times since this change was made, including $650 million in June 2007 and $600 million in June 2008. FCC implemented new policies to provide applicants with flexibilities that were intended to facilitate the use of funds, such as the ability to change service providers or modify the services originally requested. FCC and USAC established new deadlines for notification of the receipt of services. Additionally, FCC required USAC to file quarterly estimates of unused funds from prior funding years when it submits its projection of demand for E-rate funds for the upcoming quarter. According to FCC, the estimates are used solely to determine how much funding to carry over. Despite these changes, the proportion of disbursed funds is now lower, on average, than it was when we made the recommendation. The proportion of committed funds that were disbursed from 1998 through 2000 averaged 79 percent but averaged only 72 percent annually from 2001 through 2006 (see fig. 9). Unused funding is problematic because it has the potential to reduce the number of participants that will receive commitments for Priority 2 services in a given year, even when unused funds are carried over to subsequent years. Because Priority 1 services always receive priority for funding, commitments for Priority 2 services may or may not be made, based on the level of commitments for first-priority services. Carrying over unused funds may result in more funding for Priority 2 services requests, but only if commitments for Priority 1 services remain stable or decline, neither of which is the current trend. Thus, some applicants for Priority 2 services, who would receive funding if aggregate requests and commitments were more consistent with actual disbursements, do not receive funding in the current environment. We recently reported on the long-standing problem of unused funds in federal grant programs, and although the E-rate program is not technically a grant program, it has features in common with grant programs that make some degree of comparison appropriate. Our report noted that unused balances in expired grant accounts, which may be caused by poorly timed communications with grantees, are noteworthy because they can hinder the achievement of program objectives. We found that when agencies made concerted efforts to address the problem, they were able to decrease the amount of undisbursed funding in expired grant accounts. The overall participation rate among E-rate-eligible entities is about 63 percent, with public schools participating at a substantially higher rate than private schools and libraries, based on 2005 data. We found that a key circumstance influencing nonparticipation was the burdensome nature of program participation. 
Among eligible entities that do participate in the program, our survey results show that program participation is generally viewed as becoming easier but that several program requirements are still difficult to complete, particularly those related to the application for funding. Moreover, we found that a substantial amount of funding is denied because applicants do not correctly carry out application procedures. In recent years, FCC and USAC have made changes intended to ease the process of participation for eligible schools and libraries, but the primary focus of FCC remains the prevention and detection of waste, fraud, and abuse in the program. The participation rate among all types of E-rate-eligible entities is about 63 percent, based on 2005 data. Public schools—which, at more than 100,000, constitute the largest group of eligible entities—have an overall participation rate of 83 percent, but different types of public schools participate at different rates. Magnet schools participate at a higher than average rate (90 percent), and charter schools, vocational schools, and special education schools all participate at lower than average rates (37 percent, 52 percent, 41 percent, respectively). Private schools have a participation rate of 13 percent, and library systems and library branches participate at a rate of 51 percent and 31 percent, respectively. Figure 10 provides the number of participants and participation rates for public and private schools and library branches and systems. In terms of characteristics, we found that participating public schools have a higher proportion of students eligible for the national school lunch program, averaging 45 percent, compared with nonparticipating public schools’ average of 36 percent. Participating private schools have more students per teacher (14.1) than nonparticipating private schools (11), but this ratio for public schools—16—is the same regardless of E-rate participation. Participating libraries tend to have more resources than those that do not participate. For example, on average, participating library systems have 18 full-time, paid staff members and operating revenues of $1.28 million, compared with nonparticipants’ average staff size of 11 and operating revenue of $816,000. Participating library systems also have larger service area populations, averaging about 37,000, than nonparticipating library systems, which average just under 26,000. GAO-09-254SP, an electronic supplement to this report, provides additional details on the differences between participating and nonparticipating groups. Through our analysis of responses from survey respondents who had not participated in the program every year, our interviews with a selection of nonparticipating schools and libraries, and information obtained from beneficiary stakeholder groups, we identified a number of circumstances that influence nonparticipation. These circumstances may not be applicable for all entities that do not participate, but they provide some insight into issues that some nonparticipants are facing, particularly the following: Burdensome nature of program participation. Among the six nonparticipants we spoke with and the comments we received from survey respondents, the predominant reason for nonparticipation was that the application process is too complex, takes too much time, or requires too many resources. 
Four of the six nonparticipants we spoke with—two libraries and two public schools—cited the difficult or cumbersome nature of the application process as a reason for not participating. Among 28 survey respondents who responded to an open-ended question on reasons for nonparticipation, 5 stated that the program is too complicated or difficult or that their staff did not have enough time for the required tasks; another 4 stated that the amount of time required to participate in the program was not worth the return. Another 5 respondents said they intended to apply for funding but missed an application deadline. Additionally, a 2007 survey of public libraries by the American Library Association (ALA) estimated that 38 percent of libraries did not participate in E-rate because the application process is too complicated. Internet filtering requirements. Public libraries may be reluctant to participate in E-rate because of the requirement that recipients of Internet access or Priority 2 funding install Internet content filters in accordance with the Children’s Internet Protection Act. Both of the nonparticipating libraries we spoke with cited this as a reason for nonparticipation, and ALA, based on responses to its survey, estimates that 34 percent of libraries do not apply for E-rate because of this requirement. One library official we spoke with said that Internet filters inhibit access to free and open communication. Additionally, according to this official, if adult users want to access blocked information, library workers have to take the time to manually turn filters off and then back on, which creates an administrative burden. Inability to prove discount percentage. As discussed previously, the primary mechanism participants use for calculating their discount rate is student eligibility for the National School Lunch Program. However, some private schools do not participate in this program and therefore use an alternative FCC-approved method, such as surveying families of children who attend the school to determine the family’s income. One nonparticipating private school we spoke with said it had been unable to collect this information because families may consider the information personal or sensitive and be reluctant to provide it. According to USAC officials, without this information, applicants are entitled to receive the lowest discount rate under the program—20 percent. Representatives of the National Association of Independent Schools also noted the inability to prove discount percentages as one of the main reasons why private schools do not participate in E-rate. FCC has been aware for some years that a portion of eligible entities do not participate in the E-rate program. For example, in conducting research for our December 2000 report on the E-rate program, we learned from FCC officials that they had finalized a new performance plan for the E-rate program that included tactical goals for increasing participation by urban low-income school districts and rural school districts, as well as rural libraries and libraries serving small areas, all of which had below-average participation rates. During our 2005 review of the E-rate program, when we asked FCC officials about the plan, we were told that it had not been implemented and that none of the FCC staff currently working on E-rate were familiar with the plan. 
Most recently, FCC, in its 2007 report and order on the Universal Service Fund, directed USAC to contact a sample of the economically disadvantaged schools and libraries that have not participated in the E-rate program, determine why these schools and libraries do not participate, and assist them, if necessary, at the beginning of the application process. Although USAC has stepped up its outreach efforts, it has not taken steps specifically to target and assist nonparticipants. Among the schools and libraries that received E-rate funding, we found that many believe program participation has generally gotten easier, rather than harder. We asked survey respondents whether they found participating in the E-rate program easier, more difficult, or about the same as in 2005. Of those who had been participating in the program since that time, we estimate that 15 percent find the program more difficult, with the remaining respondents evenly split between finding the program easier and finding it to be about the same. (See fig. 11.) While relatively few participants believe that the program has become more difficult, some aspects of the program pose difficulties for participants. When we asked survey respondents about the ease or difficulty of specific aspects of program participation, we identified nine program elements that one-third or more of participants consider to be very or somewhat difficult, as shown in figure 12. Several of the program elements identified by participants as difficult relate to the application process, including preparing the technology plan and allocating costs by use or location. Notably, a program element found to be among the most difficult was the overall process of preparing the application for funding. The question we asked on the overall process encompassed such elements as determining the eligibility of products and services and complying with competitive bidding requirements. Participating in the selective review process was found to be difficult by 39 percent of the participants who had experience with this process. Additionally, of participants who had knowledge or experience with preparing appeals of funding decisions, 53 percent found this aspect of the program to be difficult, making it the most difficult element, according to our survey responses. A number of survey respondents provided comments indicating that although individual program elements may not be overly difficult or time consuming, having such a large number of requirements to fulfill causes difficulty. Comments included the following: “very complex, lots of steps, lots of time lines to keep track of. A very labor intensive process.” “I can’t stress enough the amount of time that it takes to do E-rate work. It is not just the applications, but the work-load required for the reviews has been very high.” “E-rate has been making the process simpler and easier to file the past few years. It is still a very exhausting process to ensure everything has been done correctly. It seems no matter what precautions you take, errors still exist.” Also illustrative of the extent to which applicants have difficulty navigating program rules is the rate of funding denials due to applicant error. Each year, applicant errors account for the denial of a substantial amount of E-rate funding.
Of the approximately $33 billion in funding that was requested between 1998 and 2007 but that did not result in a funding commitment, about 23 percent was denied because applicants did not correctly carry out application procedures. However, the proportion of funding denied due to applicant error declined from 31 percent in 2002 to 4 percent in 2007. FCC’s Office of Inspector General has examined participant noncompliance with program rules and, in 2008, reported that such noncompliance puts the E-rate program at risk of significant improper payments. The Inspector General audited funding requests from 260 E-rate participants that received funding in 2007 and found that two of the most frequently identified types of noncompliance resulting in improper payments were disregarding FCC program rules and inadequate documentation. USAC noted that in the cases of noncompliance attributable to lack of documentation, there may not be actual noncompliance with the requirements. A number of resources exist to help participants successfully complete program requirements, both through USAC and through other sources. We asked survey respondents about the usefulness of a number of these resources, as shown in figure 13. The resource most frequently cited as useful was paid consultants, with an estimated 79 percent of respondents who expressed an opinion viewing this resource as very or extremely useful. We also found, however, that the use of a consultant had little impact on the reported ease or difficulty of completing required program elements. Other resources found to be most useful were state E-rate coordinators (67 percent), USAC’s help desk (61 percent), and USAC’s training seminars (57 percent). Notably, for two of the resources applicants found most useful—USAC’s help desk and state E-rate coordinators—a substantial percentage of participants—25 percent and 31 percent, respectively—responded “do not know/do not use,” indicating that sufficient outreach may not have been made to inform applicants of these resources. In recent years, FCC and USAC have made changes intended to ease the process of participation for eligible schools and libraries, including the following: In response to FCC’s finding that a significant number of applications for E-rate funding were being denied for administrative, clerical, or procedural errors, FCC adopted the Bishop Perry order in May 2006, which stated that USAC must provide all E-rate applicants with an opportunity to cure clerical errors and errors related to FCC rules and orders in their applications. FCC and USAC officials told us that the increased outreach between application reviewers and applicants that the order directed has made a substantial difference in the rate of funding denials. USAC has increased beneficiary education and outreach efforts by, for example, increasing the number and location of training sessions it provides each fall. USAC officials also told us that they intend to increase the number of staff members dedicated to applicant outreach. USAC has increased the number of program elements that can be completed online and has revamped the templates that application reviewers use to communicate with applicants to make them easier for individuals without technical backgrounds to understand. FCC issued a Notice of Proposed Rulemaking (NPRM) in 2005 to broadly examine many aspects of the Universal Service Fund, including the E-rate program.
Although this proceeding has been continuing for well more than 3 years, some matters remain open, including ways to improve the administration of the application process for E-rate funding. It is unclear when FCC will take final action on these matters. We reviewed comments submitted in response to the NPRM’s request for possible program improvements to determine what improvements were most commonly suggested and why. We then included questions in our survey to obtain participants’ views on whether they favor or oppose these improvements. Figure 14 shows the changes that were strongly or somewhat favored by more than half of participants. The improvements most favored by participants—each was favored by more than four out of five participants—are as follows: Enable applicants to go online to update their applications, make service substitutions, correct service provider identification numbers, change providers, and cancel or reduce funding. Streamline the application for Priority 1 services. Allow the use of a multiyear application for Priority 1 services. Establish set dates for submitting the application form. FCC continues to consider comprehensive Universal Service Fund (USF) reform proposals raised in, or in response to, the 2005 NPRM, including ways to simplify the E-rate program; additionally, FCC issued a notice of inquiry in September 2008 to obtain new and refined information from commenters on how to strengthen the management, administration, and oversight of USF programs. FCC officials told us in November 2008 that while they will consider commenters’ suggestions on streamlining the E-rate program, they cannot simplify the program if doing so would weaken internal controls aimed at preventing and detecting waste, fraud, and abuse. We agree that FCC should not simplify the program at the expense of a robust system of internal controls but continue to believe that FCC needs to take action to improve the program rather than simply continuing to gather data. Since our first recommendation in 1998 that, in accordance with the Government Performance and Results Act of 1993 (GPRA), FCC establish performance goals and measures for the E-rate program, FCC has taken steps in this direction but still has not successfully met this fundamental requirement of results-oriented management. Table 2 provides details on our past findings related to the E-rate program’s performance goals and measures, our recommendations, and FCC’s responses. In its August 2007 order, FCC adopted two types of performance measures for the E-rate program—one for Internet connectivity and the other for application processing. This order required that USAC measure, and report to FCC annually, data from program participants on broadband connections provided to program participants, including the number of buildings served by broadband services and the bandwidth of these services. According to FCC, the collection of these data—once analyzed collectively—will allow the agency to determine how the E-rate program can better meet the needs of applicants. With respect to performance measures for application processing, the order required that USAC collect and annually report to FCC performance data on a number of specific output measures, including the number of applicants served and the discount rate they received, and the average dollar amount awarded per funding request number. A memorandum of understanding between FCC and USAC also requires USAC to report to FCC on performance data relative to funding applications.
Subsequent to the 2007 order, FCC determined that the performance measurement data would be used, in part, as one element of a performance-based evaluation and compensation program for USAC’s executives. FCC officials told us that these data would also be used to publicly demonstrate USAC’s performance in implementing the E-rate program. In addition to the performance measurements specific to the E-rate program, the 2007 order sets forth performance measures applicable to the administration of the Universal Service Fund programs, including the accuracy of billing and disbursements, administrative costs, and the amount of improper payments that are recovered, among other things. The memorandum of understanding further describes the data USAC is to collect related to these administrative performance measures and additionally describes service quality performance measures. While FCC’s efforts to develop performance measures have the potential to eventually produce better information than is currently available about the E-rate program’s performance, these measures fall short when compared with the key characteristics of successful performance measures. In our past work, we have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision making. Following is a discussion of these characteristics and the extent to which FCC has fulfilled them in developing measures for E-rate performance. Measures should be tied to goals and demonstrate the degree to which the desired results are achieved. These program goals should in turn be linked to the overall agency goals. However, the measures that FCC has adopted are not based on such linkage because the agency does not currently have performance goals for the E-rate program. By establishing performance measures before establishing the specific goals it seeks to achieve through the E-rate program, FCC may waste valuable time and resources collecting the wrong data and, consequently, not develop the most appropriate measures for results. Measures should address important aspects of program performance. For each program goal, a few performance measures should be selected that cover key performance dimensions and take different priorities into account. For example, limiting measures to core program activities enables managers and other stakeholders to assess accomplishments, make decisions, realign processes, and assign accountability without having an excess of data that could obscure rather than clarify performance issues. Also, performance measures should cover key governmentwide priorities—such as quality, timeliness, and customer satisfaction. The two types of performance measures that FCC adopted appear to address certain key performance dimensions—particularly because the connectivity measure centers on the program’s statutory goal of providing eligible schools and libraries with access to advanced telecommunications services, and, by selecting just two types of measures, there are fewer chances of obscuring the most important performance issues. The measures also appear to take into account such priorities as timeliness and customer satisfaction. However, again, without first setting specific performance goals for the E-rate program, FCC cannot be sure it has adopted the most appropriate performance measures. Measures should provide useful information for decision making.
Performance measures should provide managers with timely, action-oriented information in a format that helps them make decisions that improve program performance. According to FCC officials, the application-processing data that FCC is currently requiring USAC to collect will be used in making compensation decisions for USAC executives, and it will also be available in USAC’s annual report to provide the general public with information on E-rate’s performance in this regard. However, the application-processing data are output, not outcome, oriented, and the intended uses of the data do not include such program-management activities as allocating resources or adopting new program approaches if needed. The limited use of these data, combined with the absence of specific program goals, raises concern about the effectiveness of these performance measures. In the Telecommunications Act of 1996, Congress said that access to advanced telecommunications for schools and libraries was one of the principles for the preservation and advancement of universal service. In managing the E-rate program, FCC has been guided by how well the program meets the broad, overarching goal of universal service, rather than strategic goals specific to the E-rate program. After 12 years of program operations and committing more than $20 billion in funding awards, FCC has not developed adequate performance goals and measures for the E-rate program. Because we have repeatedly identified the lack of adequate performance goals and measures as a weakness in the E-rate program, we reiterate our 2005 recommendation that FCC define annual, outcome-oriented performance goals for the E-rate program that are linked to the overarching goal of providing universal service. Moreover, we have identified several trends that raise questions about the direction of the program. Is it in the national interest, in an increasingly broadband-oriented world, that a substantial and growing portion of commitments is for telecommunications services such as local and cellular telephone service? Does the program’s high participation rate among public schools, but lower participation rate among private schools and libraries, lead to an acceptable distribution of E-rate funding among eligible entities? Without a strategic vision for the program, and accompanying performance goals and measures, it is difficult for FCC to make informed decisions about the future of the program and more effectively target available funding. Additionally, we have previously identified the program’s low disbursement rate as an area of concern. In response to a previous recommendation, FCC took steps to increase the disbursement rate. However, we found that the disbursement rate has not increased and a substantial amount of committed funding is not disbursed. In the current E-rate environment, where requests for funding consistently exceed the annual funding cap, many applicants seeking support for Priority 2 services are denied funding, yet a significant amount of funding committed to applicants is not disbursed. If applications and commitments more closely tracked disbursements—that is, if the disbursement rate were higher—some applicants who were denied funding might have received funding for internal connection projects. Moreover, in light of the nation’s current fiscal constraints, it is appropriate to make the most effective possible use of available E-rate funding by minimizing the amounts of committed funds that are not disbursed.
To better provide a foundation for effective management of the E-rate program and to ensure that program funds are used efficiently and in a manner to support desired program outcomes, we recommend that the Federal Communications Commission take the following two actions: Review the purpose and structure of the E-rate program and prepare a report to the appropriate congressional committees identifying FCC’s strategic vision for the program; this report should include the program’s long-term goals, whether the vision can be achieved using the existing program structure (e.g., the priority rules and discount matrix), and whether legislative or regulatory changes are necessary. Provide information in its annual performance plan on the amount of undisbursed funding associated with commitments that have expired and why these funds were not disbursed, and the actions taken to reduce the amount of undisbursed funding and the outcomes associated with these actions. We provided a draft of this report to FCC and USAC for their review and comment. FCC and USAC provided technical comments that we incorporated, where appropriate. In its comments, FCC reiterated the status of its performance goals and measures, as well as the disbursement rate. In particular, FCC noted that it identified goals for the E-rate program when it first adopted the program and recently requested comment on establishing new goals. Further, FCC noted that its performance measures do, or will, meet the three characteristics of successful measures that we identified. We acknowledge the efforts FCC has made to date and recognize the successes of the E-rate program that FCC identified in its letter. However, these efforts are not consistent with successful performance goals and measures. For example, agencies should establish explicit performance goals and measures, use intermediate goals and measures to illustrate progress, and identify projected target levels of performance for multiyear goals. Establishing effective performance goals and measures will help FCC guide the E-rate program. Finally, FCC noted that it has taken action to address the disbursement rate and that our analysis may inaccurately portray a decrease in the disbursement rate since disbursements typically occur over several years and therefore the disbursement rate in the first few years after commitments are made will be lower than in later years. We agree that the disbursement rate in the first few years could be lower than in later years. However, the disbursement rate for every funding year (including 2001 through 2004) remains less than the rate in 2000, when we made our initial recommendation on this issue. Thus, we modified the text to note that the disbursement rate remains low but is not necessarily decreasing. FCC’s full comments and our responses appear in appendix II. In its comments, USAC noted that it stands ready to work with FCC in developing and reporting additional performance goals and measures. USAC also noted that it is aware of committed funds going unused but that funding years 2005, 2006, and 2007 remain open, implying that the disbursement rate could increase. It further noted that the gap between commitments and disbursements is attributable more to the structure of the program than to USAC’s administration of the program. 
We agree that the disbursement rate associated with commitments made in 2005 through 2007 may increase, but, as mentioned above, the disbursement rate for every funding year remains less than the rate in 2000, when we made our initial recommendation. Lastly, USAC noted that it intends to evaluate the participant survey data to determine whether it can devise strategies to improve program participation. USAC’s full comments and our responses appear in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of the Federal Communications Commission, and the Chairman of the Universal Service Administrative Company. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Our objectives were to address the following questions: (1) What are key trends in the demand for and use of E-rate funding and what are the implications of these trends? (2) To what extent do eligible entities apply for E-rate funds, how well do applicants navigate the E-rate program’s requirements, and what steps is the Federal Communications Commission (FCC) taking to facilitate program participation? (3) What are FCC’s performance goals and measures for the E-rate program, and how do they compare to key characteristics of successful goals and measures? The following sections describe the various procedures we undertook to answer these objectives. In addition, we conducted the following background research that helped inform each of our reporting objectives. We reviewed prior GAO reports on E-rate, FCC’s Universal Service Monitoring Reports on the E-rate program, and documentation from FCC and the Universal Service Administrative Company (USAC) on the structure and operation of the E-rate program. We interviewed officials from FCC’s Office of Managing Director, Office of Inspector General, and Wireline Competition Bureau to identify actions undertaken to address previously identified problems and plans to address issues of concern in the program; and officials from USAC’s Schools and Libraries Division, Office of General Counsel, and Office of Finance to collect information on program operations and USAC’s actions to implement prior FCC orders on E-rate. We also interviewed representatives of E-rate stakeholder groups, including the U.S. Bureau of Indian Education, the Council of Great City Schools, the National Association of Independent Schools, the American Library Association, the Education and Library Networks Coalition, the State E-rate Coordinators Alliance, and the E-rate Service Provider Association, as well as individual school districts, libraries, and telecommunications companies. To determine trends in the demand for and use of E-rate funding, we obtained data from the Streamlined Tracking and Application Review System (STARS), which is used to process applications for funding and track information collected during the application review process. 
When analyzing and reporting on the data, we took into consideration the limitations on how data can be manipulated and retrieved from STARS, since the system was designed to process applications rather than to serve as a data retrieval system. We assessed the reliability of the data by questioning officials about controls on access to the system and data back-up procedures; additionally, we reviewed the data sets provided to us for obvious errors and inconsistencies. Based on this assessment, we determined that the data were sufficiently reliable to describe broad trends in the demand for and use of E-rate funding. We obtained the following data—including annual and cumulative figures—for funding years 1998 through 2007: the number and characteristics of applicants, including their entity type, discount level, and location; dollar amounts of funding requests, commitments, denials, and disbursements, by service category; dollar amounts of individual funding requests, commitments, and disbursements, for each applicant for each funding year; and reasons accounting for funding requests not being granted, by dollar amount and by service category. In order to provide these data, USAC’s subcontractor, Solix, performed queries on the system and provided the resulting reports to us between April 2008 and December 2008. Data from the STARS system can change on a daily basis as USAC processes applications for funding and reimbursement, applicants request adjustments to requested or committed amounts, and other actions are taken. As a result, the data we obtained and reported on reflect the amounts at the time that Solix produced the data and may be somewhat different if we were to perform the same analyses with data produced at a later date. For the purposes of analyzing and reporting on the amounts of funding for telecommunications services, Internet access, and internal connections that were requested, committed, and disbursed, we collapsed the six service categories in USAC’s database into three categories, as shown in table 3. To obtain information on how well E-rate beneficiaries navigate the program’s requirements and procedures, the extent to which they use funds committed to them, and their views on how to improve the program, we conducted a Web-based survey of schools and libraries that participate in the E-rate program (see GAO-09-254SP). To develop the survey questionnaire, we reviewed existing studies about the program, including previous and ongoing GAO work, and interviewed stakeholder groups knowledgeable about the program and issues of concern to beneficiaries. We designed draft questionnaires in close collaboration with a GAO social science survey specialist. We conducted pretests with eight E-rate participants representing different types of applicants—schools, school districts, and libraries—and from rural and urban areas, to help further refine our questions, develop new questions, and clarify any ambiguous portions of the survey. We conducted these pretests in person and by telephone. We drew our survey sample from Form 471 applications for funding year 2006 that received a commitment greater than zero dollars. For each such application, we obtained data from USAC that included the following: billed entity number (a unique identifier that USAC assigns to each applicant); entity type; whether the entity is located in an urban, rural, or mixed area; the amount of funds committed for Priority 1 services; and the amount of funds committed for internal connections.
Based on these data, we created three stratification variables: Entity type. The four categories of entities eligible to apply for funding are school districts, schools, libraries, and educational consortia. Educational consortia—which can be made up of any combination of schools, school districts, and libraries—constituted less than 2.5 percent of the applications in the data set we received and were treated as out of scope for this survey. The remaining three entity types were therefore used for our first stratification variable. Urban/rural status. Applicants must report whether they are located in a rural or urban area because this information is used to determine their discount level. Three percent of the applications were for entities located in a mixed urban-rural area; we excluded this category of applications as out of scope for this survey. The remaining cases were divided between urban and rural for our second stratification variable. Priority level of funding. The third stratification variable was the priority level for the funding commitments made for each application. To control the funding priority in our sample, we divided the applications into those associated with beneficiaries that requested (1) only Priority 1 services funding, (2) only internal connections funding, and (3) both. We combined funding priority categories (2) and (3) for analysis of survey results. Analysis of the application data that we received revealed that many beneficiaries filed more than one application with USAC for funding year 2006 and received funding commitments for these requests. As a result, since our sample design was based on applications and not entities, some entities had more than one application selected. We sent these entities only one survey and weighted their responses accordingly. The number of applications in each of our sample strata and the sample size are shown in table 4. We used a proportional allocation to assign sample units to strata, with an adjustment for strata that had small populations: if the proportional allocation for a stratum was less than 20, we set the sample allocation at 20 or, if the stratum contained fewer than 20 applications, at the total number of applications in that stratum. The stratum sample sizes for our survey were determined to provide a 4 percent overall precision for an attribute measure at the 95 percent level of confidence. Our goal was to survey individuals who were responsible for completing E-rate-related tasks—such as preparing forms and responding to information requests—for each sampled entity. Our data set included the name and contact information for the individual listed as the contact on Form 471; we sent these individuals the survey. Because some entities employ a consultant to fill out their application and others use a regional or state official who is responsible for multiple entities’ applications, our sample included different entities that shared the same contact person. We contacted these individuals to identify an alternate entity-specific contact to receive the survey. If no such alternate could be found, the original contact was sent one survey for each sampled entity. In those cases, we arranged for the individual to fill out the questions that pertained to all applications only once, and we separately obtained the application-specific information for each of their surveys. A total of 697 individuals received questionnaires for our sample of 722 Form 471 E-rate applications.
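As a rough illustration only, the following Python sketch applies the kind of proportional allocation with a small-stratum adjustment described above; the stratum labels and counts are hypothetical and are not drawn from this report or from GAO's actual sampling code.

# Illustrative sketch (not GAO's actual code): proportional allocation of a
# fixed total sample across strata, with a minimum of 20 per stratum unless
# the stratum itself contains fewer than 20 applications.

def allocate_sample(stratum_sizes, total_sample, minimum=20):
    """Return a dict of sample sizes per stratum.

    stratum_sizes: dict mapping stratum name -> number of applications (N_h)
    total_sample:  overall number of applications to sample
    minimum:       floor applied to small strata
    """
    population = sum(stratum_sizes.values())
    allocation = {}
    for stratum, n_h in stratum_sizes.items():
        proportional = round(total_sample * n_h / population)
        if proportional < minimum:
            # Use the whole stratum if it has fewer than `minimum` applications,
            # otherwise raise the allocation to the minimum.
            proportional = n_h if n_h < minimum else minimum
        allocation[stratum] = min(proportional, n_h)  # never sample more than exist
    return allocation

# Hypothetical stratum counts, for illustration only.
example_strata = {
    "school district / urban / priority 1 only": 9000,
    "school district / rural / both priorities": 4000,
    "library / rural / priority 1 only": 15,
}
print(allocate_sample(example_strata, total_sample=722))

Because of rounding and the minimum-size adjustment, the allocated total can differ slightly from the target sample size.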
The results from our sample are weighted to reflect the population of beneficiaries that use the E-rate program. We launched our Web-based survey on April 21, 2008, and closed the survey to responses on June 18, 2008. Log-in information was e-mailed to all sampled participants. We sent up to three follow-up e-mail messages to nonrespondents over the next 4 weeks. We then contacted by telephone those who had not completed the questionnaire. We received responses for 543 questionnaires, for an overall response rate of 78 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. In addition to sampling errors, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. As indicated above, we collaborated with a GAO social science survey specialist to design draft questionnaires, and versions of the questionnaire were pretested with eight members of the surveyed population. In addition, we provided a draft of the questionnaire to FCC and USAC for their review and comment. From these pretests and reviews, we made revisions as necessary. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second, independent analyst checked the accuracy of all computer analyses. To determine the percentage of eligible entities that participate in the E-rate program and the characteristics of program participants and nonparticipants, we performed a matching analysis using data from the Department of Education and USAC. We obtained three databases from the U.S. Department of Education’s National Center for Education Statistics (NCES): Common Core of Data (CCD). CCD is a program of NCES that annually collects data about all public schools, public school districts, and state education agencies in the United States. We used the most recent complete data set for individual public schools, which was for the 2005-2006 school year. Private School Universe Survey (PSS). The target population for PSS consists of all private schools in the United States that meet the NCES definition of private schools. Data from the 2005-2006 school year were used. Public Libraries Survey (PLS). PLS is designed as a universe survey and provides a national census of public libraries and their public service outlets, as well as data on these entities. Data from 2005 were used. We assessed the reliability of these data sets by (1) reviewing NCES’s technical and methodological reports on these studies and (2) examining the data for obvious inconsistencies. We determined that the data were sufficiently reliable to use as sources of summary statistics about program participants and nonparticipants. We also used data from USAC’s STARS system for the 2005 funding year.
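As a minimal sketch, using entirely hypothetical counts rather than the survey's actual data, the following Python example shows how a weighted estimate of a proportion and an approximate 95 percent confidence interval can be computed from a stratified sample of the kind described above; it is not the estimation code used for this survey.

# Illustrative sketch (hypothetical numbers, not GAO's estimation code):
# stratified estimate of a proportion with an approximate 95 percent margin of error.
import math

# Each stratum: population size N, sample size n, and number of "yes" responses.
strata = [
    {"N": 9000, "n": 200, "yes": 120},
    {"N": 4000, "n": 100, "yes": 40},
    {"N": 15,   "n": 15,  "yes": 9},
]

N_total = sum(s["N"] for s in strata)
estimate = 0.0
variance = 0.0
for s in strata:
    w = s["N"] / N_total          # stratum weight
    p = s["yes"] / s["n"]         # stratum sample proportion
    fpc = 1 - s["n"] / s["N"]     # finite population correction
    estimate += w * p
    variance += (w ** 2) * fpc * p * (1 - p) / (s["n"] - 1)

margin = 1.96 * math.sqrt(variance)  # approximate 95 percent margin of error
print(f"Estimated proportion: {estimate:.3f} +/- {margin:.3f}")

The finite population correction reflects that each stratum is drawn from a known, limited number of applications; a stratum that is sampled completely contributes no sampling variance.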
USAC provided us with data on each entity that was included on applications for the 2005 funding year. We received two files from USAC—one for schools and one for libraries—that included the entity’s name, NCES identification number, address, city, state, and ZIP code. The school file also included information on whether each school was public or private; we used this information to separate public from private schools. We assessed the reliability of the STARS data system as discussed previously. Additionally, we examined the data set that we obtained for matching purposes to identify inconsistencies or obvious errors. We found that some of the data fields were not fully completed. For example, there were a number of records with missing data, incomplete data, and incorrect NCES identification numbers. However, we concluded that the incomplete nature of some of the records did not significantly affect our intended purpose of identifying program participants, and we therefore determined that the data were sufficiently reliable. We matched USAC’s public school data against CCD, USAC’s private school data against PSS, and USAC’s library data against the PLS from NCES. To identify which entities in the NCES data sets were E-rate participants, we used SAS, a statistical software application, to compare USAC records with NCES records, matching first on identification numbers, then on combinations of entity names, states, cities, ZIP codes, and street addresses. When this procedure could find no exact match, we used an SAS function that measures asymmetric spelling distance between words (SPEDIS) to determine the likelihood that entity names from the two data sets did match and to generate possible pairs of matching entities. The possible matches for an entity were written to a spreadsheet, which we reviewed manually to select the best possible match. For both computerized and manual matches, we assessed a random sample of the matches to calculate error rates for the analysis. Based upon our sample results, we estimate the error rate for matching records between the USAC and the Department of Education’s databases as 1.7 percentage points. Unless otherwise noted, all of the percentage estimates cited in the report, which are based upon matching of entity records, have an overall error rate of 3.4 percentage points or less at the 95 percent level of confidence. Having identified whether each entity in the NCES data sets participated in the E-rate program, we then ran summary statistics on data fields of interest for the groups of participants and nonparticipants. To better understand why eligible entities do not participate in the E-rate program, we obtained anecdotal, nongeneralizable information through interviews with six nonparticipants. These entities included library systems, public school districts, and private schools and were located in both urban and rural areas. We identified nonparticipant interviewees by asking the State E-rate Coordinators Alliance for the names of schools that they knew did not participate and by searching in USAC’s online database of program participants for entities that were not listed as having applied for funding. We asked interviewees about their reasons for not participating in the E-rate program, potential future changes to the program that could result in their participation, and sources of funding that they use to pay for information technology and telecommunications expenses.
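GAO performed the matching described above in SAS; purely as an illustration of the same two-stage logic (exact matching on identification numbers first, then fuzzy matching of entity names to generate candidate pairs), the following Python sketch uses difflib's SequenceMatcher as a stand-in for SAS's SPEDIS spelling-distance function. The records and field values shown are hypothetical.

# Illustrative sketch of two-stage record matching with hypothetical records.
from difflib import SequenceMatcher

nces_records = [
    {"id": "0100005", "name": "Lincoln Elementary School", "state": "AL", "zip": "36003"},
    {"id": "0100006", "name": "Washington Middle School", "state": "AL", "zip": "36004"},
]
usac_records = [
    {"id": "0100005", "name": "Lincoln Elementary", "state": "AL", "zip": "36003"},
    {"id": "",        "name": "Washington Mid Sch",  "state": "AL", "zip": "36004"},
]

def name_similarity(a, b):
    """Crude similarity score between two entity names (0 to 1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

matches = []
for usac in usac_records:
    # Stage 1: exact match on the NCES identification number, when present.
    exact = [n for n in nces_records if usac["id"] and usac["id"] == n["id"]]
    if exact:
        matches.append((usac["name"], exact[0]["name"], "exact id"))
        continue
    # Stage 2: fuzzy match on name, restricted here to the same state and ZIP code.
    candidates = [n for n in nces_records
                  if n["state"] == usac["state"] and n["zip"] == usac["zip"]]
    if candidates:
        best = max(candidates, key=lambda n: name_similarity(usac["name"], n["name"]))
        matches.append((usac["name"], best["name"], "fuzzy name"))

for usac_name, nces_name, how in matches:
    print(f"{usac_name!r} -> {nces_name!r} ({how})")

As in the methodology described above, the fuzzy-match candidates would be written out for manual review rather than accepted automatically.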
We reviewed the Telecommunications Act of 1996 to determine what the performance goals and measures for the E-rate program are and how the measures compare to key characteristics of successful performance measures. We then reviewed our past products and other literature on results-oriented management and effective practices for setting performance goals and measures. We compared this information to the program goals and measures that FCC set forth in agency documentation—including an order, proposed rulemaking, strategic plan, and performance and accountability reports. We also reviewed the Office of Management and Budget’s Program Assessment Rating Tool 2003 report on the E-rate program’s effectiveness and its 2007 update to this report. In addition, we interviewed officials from FCC’s Wireline Competition Bureau, Office of Managing Director, and Office of Inspector General, and officials from USAC to obtain their views on and plans to implement E-rate performance goals and measures. We conducted this performance audit from July 2007 to March 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are GAO’s comments on the Federal Communications Commission’s letter dated March 10, 2009. 1. We acknowledge the efforts that FCC has made to develop performance goals; however, the goals FCC identified are not consistent with successful performance goals. For example, agencies should establish explicit performance goals and measures, use intermediate goals and measures to illustrate progress, and identify projected target levels of performance for multiyear goals. 2. We are not suggesting that the E-rate program may no longer serve an existing need; this was the conclusion of the Office of Management and Budget (OMB). Rather, we note that without performance goals, FCC does not have a basis on which to determine whether the growing emphasis on Priority 1 services is appropriate. We cite OMB’s conclusion to emphasize that effective performance goals would help FCC guide the E-rate program. 3. We agree that FCC’s performance measures address one characteristic of successful measures—measures should address important aspects of program performance. However, FCC’s measures do not currently meet the remaining two characteristics, as FCC noted that the measures “will be tied to goals” and “will provide useful information to decision-making.” 4. We agree that the disbursement rate for more-recent funding years may increase due to applicants seeking extensions, which can take time to resolve. As a result, we modified the report to note that the disbursement rate remains low but is not necessarily decreasing. However, the disbursement rate for every funding year, including 2001 through 2004, remains less than the rate in 2000 when we made our initial recommendation to address the low disbursement rate. The following are GAO’s comments on the Universal Service Administrative Company’s letter dated March 6, 2009. 1. We agree that the disbursement rate for funding years 2005, 2006, and 2007 may increase as applicants receive delivery of services and submit invoices.
However, the disbursement rate for every funding year, including 2001 through 2004, remains less than the rate in 2000, when we made our initial recommendation to address the low disbursement rate. 2. We are not suggesting that USAC’s administration of the E-rate program is a significant contributing factor to the low disbursement rate. Rather, we identify several factors that appear to contribute to the low disbursement rate, including the incentives inherent in the program. For example, we note that under current program rules, applicants have an incentive to overestimate costs for Priority 1 services. These and other factors that we identify in the report likely contribute to the low disbursement rate. In addition to the contact named above, Michael Clements and Faye Morrison, Assistant Directors; Eli Albagli; Carl Barden; Jennifer Clayborne; Elizabeth Curda; Abe Dymond; Elizabeth Eisenstadt; Michele Fejfar; Simon Galed; Heather Halliwell; Kristen Jones; Ying Long; John Mingus; Josh Ormond; Betty Ward-Zukerman; Mindi Weisenbloom; and Crystal Wesco made key contributions to this report. Telecommunications: Greater Involvement Needed by FCC in the Management and Oversight of the E-Rate Program. GAO-05-151. Washington, D.C.: February 9, 2005. Schools and Libraries Program: Update on E-rate Funding. GAO-01-672. Washington, D.C.: May 11, 2001. Schools and Libraries Program: Update on State-Level Funding by Category of Service. GAO-01-673. Washington, D.C.: May 11, 2001. Schools and Libraries Program: Application and Invoice Review Procedures Need Strengthening. GAO-01-105. Washington, D.C.: December 15, 2000. Schools and Libraries Program: Actions Taken to Improve Operational Procedures Prior to Committing Funds. GAO/RCED-99-51. Washington, D.C.: March 5, 1999. Telecommunications and Information Technology: Federal Programs That Can Be Used to Fund Technology for Schools and Libraries. GAO/T-HEHS-98-246. Washington, D.C.: September 16, 1998. Schools and Libraries Corporation: Actions Needed to Strengthen Program Integrity Operations before Committing Funds. GAO/T-RCED-98-243. Washington, D.C.: July 16, 1998. Telecommunications: Court Challenges to FCC’s Universal Service Order and Federal Support for Telecommunications for Schools and Libraries. GAO/RCED/OGC-98-172R. Washington, D.C.: May 7, 1998. Telecommunications: FCC Lacked Authority to Create Corporations to Administer Universal Service Programs. GAO/T-RCED/OGC-98-84. Washington, D.C.: March 31, 1998.
The Federal Communications Commission's (FCC) Schools and Libraries Universal Service Support Mechanism--also known as the E-rate program--is a significant source of federal funding for information technology for schools and libraries, providing about $2 billion a year. As requested, GAO assessed issues related to the E-rate program's long-term goals, including (1) key trends in the demand for and use of E-rate funding and the implications of these trends; (2) the rate of program participation, participants' views on requirements, and FCC's actions to facilitate participation; and (3) FCC's performance goals and measures for the program and how they compare to key characteristics of successful goals and measures. To perform this work, GAO analyzed data going back to the first year of the program, surveyed a sample of participating schools and libraries, reviewed agency documents, and interviewed agency officials and program stakeholders. Requests for E-rate funding consistently exceed the annual funding cap, and increased commitments for telecommunications and Internet services, combined with significant undisbursed funds, limit funding for wiring and components needed for data transmission. Although still exceeding available funds, total amounts requested have generally declined since 2002, largely due to declining requests for wiring and components. Funding commitments in recent years reflect this trend, with the amount of funding for wiring and components outweighed by funding for telecommunications services and Internet access. In addition, a significant amount of committed funds are not disbursed to program participants; for commitments made in 1998 through 2006, about one-quarter of the funds have not been disbursed. Unused funds are reallocated for use in future years but are still problematic because they preclude other applicants from being funded. Participation rates and participants' views on program requirements indicate difficulties in the E-rate application process, which FCC and the Universal Service Administrative Company (USAC)--the program's administrator--are taking steps to address. The participation rate among the more than 150,000 eligible schools and libraries is about 63 percent, but participation rates among groups vary, from 83 percent among public schools to 13 percent among private schools. According to nonparticipants, a key circumstance influencing nonparticipation is the complexity of program requirements, even though participants reported that participation is becoming easier. Still, E-rate program data show that some funding is denied because applicants do not correctly carry out application procedures. In recent years, FCC and USAC have made changes intended to ease the process of participation for schools and libraries, such as giving applicants an opportunity to correct clerical errors in their applications. FCC officials said they will consider further changes to facilitate participation, but their primary interest is in protecting funds from improper use. FCC does not have performance goals for the E-rate program, and its performance measures are inadequate. In 1998, GAO first recommended that FCC develop specific performance goals and measures for the E-rate program in accordance with the Government Performance and Results Act of 1993. FCC set forth specific goals and measures for some of the intervening years, but it does not currently have performance goals in place. 
Further, the performance measures it adopted in 2007 lack key characteristics of successful performance measures, such as being tied to program goals. Performance goals and measures are particularly important for the E-rate program, as they could help FCC make well-informed decisions about how to address trends in requests for and use of funds. Without them, FCC is limited in its ability to efficiently identify and address problems with the E-rate program and better target funding to highest-priority uses. FCC's piecemeal approach to performance goals and measures indicates a lack of a strategic vision for the program.
The Comptroller General recognized that he needed to shift the emphasis of the then Office of Civil Rights from a reactive, complaint processing focus to a more proactive, integrated approach. He wanted to create a work environment where differences are valued and all employees are offered the opportunity to reach their full potential and maximize their contributions to the agency’s mission. In 2001, the Comptroller General changed the name of the Office of Civil Rights to the Office of Opportunity and Inclusiveness and gave the office responsibility for creating a fair and inclusive work environment by incorporating diversity principles in GAO’s strategic plan and throughout our human capital policies. Along with this new strategic mission, the Comptroller General changed organizational alignment of the Office of Opportunity and Inclusiveness by having the office report directly to him. Also, in 2001, I was selected as the first Managing Director of the Office of Opportunity and Inclusiveness. The Office of Opportunity and Inclusiveness (O&I) is the principal adviser to the Comptroller General on diversity and equal opportunity matters. The office manages GAO’s Equal Employment Opportunity (EEO) program, including informal precomplaint counseling, and GAO’s formal discrimination complaint process. We also operate the agency’s early resolution and mediation program by helping managers and employees resolve workplace disputes and EEO concerns without resorting to the formal process. In addition, O&I monitors the implementation of GAO’s disability policy and oversees the management of GAO’s interpreting service for our deaf and hard-of-hearing employees. But effective efforts to create a diverse, fair, and inclusive work place require much more. In furtherance of a more proactive approach, O&I monitors, evaluates, and recommends changes to GAO’s major human capital policies and processes including those related to recruiting, hiring, performance management, promotion, awards, and training. These reviews are generally conducted before final decisions are made in an effort to provide reasonable assurance that GAO’s human capital processes and practices promote fairness and support a diverse workforce. Throughout the year, O&I actively promotes diversity throughout GAO. For example, last year we met with the summer interns to discuss their experiences and to provide guidance on steps that interns can take to enhance their chances for successful conversion to permanent employment at GAO. We also took steps to increase retention of our entry- level staff by counseling our Professional Development Program advisers on the importance of consistent and appropriate training opportunities and job assignments that afford all staff the opportunity to demonstrate all of GAO’s competencies. I also made several presentations that reinforced the agency’s strategic commitment to diversity, including a panel discussion on diversity in the workforce, a presentation to new Band II analysts on the importance of promoting an environment that is fair and unbiased and that values opportunity and inclusiveness for all staff, and a presentation to Senior Executive Service (SES) managers on leading practices for maintaining diversity, focusing on top leadership commitment and ways that managers can communicate that commitment and hold staff accountable for results. 
This proactive and integrated approach to promoting inclusiveness and addressing diversity issues differs from my experience as Director of the Office of Civil Rights at a major executive branch agency. As Director of that office, a position I held immediately before coming to GAO, I had little direct authority to affect human capital decisions before they were implemented, even though those decisions could adversely affect protected groups within the agency. For the most part, my role was to focus on the required barrier analysis and planning process. The problem with this approach is that agencies generally make just enough of an effort to meet the minimal requirements of the plan developed by this process. In addition to these plans, diversity principles should be built into every major human capital initiative, along with effective monitoring and oversight functions. The war for talent, especially given increasing competition from the private sector, has made it harder for GAO and other federal agencies to attract and retain top talent. Graduates of color from our nation's top colleges and universities have an ever-increasing array of career options. In response to this challenge, GAO has taken a variety of steps to attract a diverse pool of top candidates. We have identified a group of colleges and universities that have demonstrated overall superior academic quality and that either have a particular program or a high concentration of minority students. They include several Historically Black Colleges and Universities, Hispanic-serving institutions, and institutions with a significant proportion of Asian-American students. In addition, GAO has established partnerships with professional organizations and associations with members from groups that traditionally have been underrepresented in the federal workforce, such as the American Association of Hispanic CPAs, the National Association of Black Accountants, the Federal Asian Pacific American Council, the Association of Latino Professionals in Finance and Accounting, and the American Association of Women Accountants. GAO's recruiting materials reflect the diversity of our workforce, and we annually train our campus recruiters on the best practices for identifying a broad spectrum of diverse candidates. GAO's student intern program serves as a critically important pipeline for attracting high-quality candidates to GAO. In order to maximize the diversity of our summer interns, O&I reviews all preliminary student intern offers to ensure that intern hiring is consistent with the agency's strategic commitment to maintaining a diverse workforce. O&I also meets with a significant percentage of our interns in order to get their perspectives on the fairness of GAO's work environment. Moreover, our office recently analyzed the operation of the summer intern program and the conversion process and identified areas for improvement. GAO is implementing changes to address these areas, including taking steps to better ensure consistency in the interns' experiences and to improve the processes for evaluating their performance and making decisions about permanent job offers. Competency-based performance management systems are extremely complex, and it is important to put safeguards in place to monitor how such systems are implemented. As a way to ensure accountability and promote transparency, the Comptroller General made an unprecedented decision to disseminate performance rating and promotion data.
Over some objections, the Comptroller General agreed to place appraisal and promotion data by race, gender, age, disability, veteran status, location, and pay band on the GAO intranet and made this information available to all GAO staff. This approach allows all managers and staff to monitor the implementation of our competency-based performance management systems and serves as an important safeguard for these processes. As far as I am aware, no other federal agency has ever done this, nor am I aware of any major corporation in America that has taken such an action. The Comptroller General rejected the argument that an increased litigation risk should drive the agency away from disseminating this information. Instead, he stood by his position that the principles of accountability and transparency dictated that we should make this data available to all GAO employees. In addition to making this data available to all GAO staff, O&I and the Human Capital Office conduct separate and independent reviews of each performance appraisal and promotion cycle before ratings and promotions are final. In conducting its review of performance appraisals, O&I uses a two-part approach: we review statistical data on performance ratings by demographic group within each unit, and where appropriate, we conduct assessments of individual ratings. In conducting the individual assessments, we (a) examine each individual rating within the specific protected group; (b) review the adequacy of any written justification; (c) determine whether GAO's guidance on applying the standards for each of the performance competencies has been consistently followed, to the extent possible; and (d) compare the rating with the self-assessment to identify the extent to which there are differences. I meet with team managing directors to resolve any concerns we have after our review. In some instances ratings are changed, and in other cases we obtain additional information that addresses our concerns. Our promotion process review entails analyzing all recommended best-qualified (BQ) lists. We review each applicant's performance ratings for the last three years. We also review each applicant's supervisory experience. I discuss concerns about an applicant's placement with the relevant panel chair. I then meet with the Chief Operating Officer and the Chief Administrative Officer to discuss any continuing concerns. A similar process is used regarding managing directors' selection decisions. In addition to these independent reviews, GAO provides employees with several avenues to raise specific concerns regarding their individual performance ratings. The agency has an administrative grievance process that permits employees to receive expedited reviews of performance appraisal matters. Moreover, employees have access to early resolution efforts and a formal complaint process with O&I and at the Personnel Appeals Board. Despite our continuing efforts to ensure a level playing field at GAO, more needs to be done. The data show that from 2002 to 2005 the most significant differences in average appraisal ratings were between African-American and Caucasian analysts, at all bands for most years. Furthermore, the rating data for entry-level staff show a difference in ratings between African-American and Caucasian staff from the first rating, with the gap widening in subsequent ratings.
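The statistical portion of a review like the one described above can be sketched in a few lines of code. The Python below is purely illustrative: the bands, years, demographic groups, ratings, and five-point scale are hypothetical placeholders rather than GAO data or GAO's actual methodology, and the script simply computes the average rating and the gap between groups for each band and year.

```python
from collections import defaultdict

# Hypothetical appraisal records: (band, year, demographic group, rating on a 1-5 scale).
ratings = [
    ("Band I", 2004, "African-American", 3.6),
    ("Band I", 2004, "Caucasian", 3.9),
    ("Band I", 2005, "African-American", 3.5),
    ("Band I", 2005, "Caucasian", 4.0),
    ("Band II", 2005, "African-American", 3.8),
    ("Band II", 2005, "Caucasian", 4.1),
]

# Sum and count of ratings for each (band, year, group) combination.
totals = defaultdict(lambda: [0.0, 0])
for band, year, group, rating in ratings:
    totals[(band, year, group)][0] += rating
    totals[(band, year, group)][1] += 1
averages = {key: total / count for key, (total, count) in totals.items()}

# Average rating gap between the two groups for each band and year; large or
# widening gaps would be the trigger for the individual-level assessments.
for band, year in sorted({(b, y) for b, y, _, _ in ratings}):
    gap = averages[(band, year, "Caucasian")] - averages[(band, year, "African-American")]
    print(f"{band} {year}: average rating gap of {gap:.2f}")
```

Flagging the band-and-year combinations with the largest or fastest-growing gaps is the point at which the individual assessments described above would begin.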
These differences are inconsistent with the concerted effort to hire analysts with very similar qualifications, educational backgrounds, and skill sets. In June 2006, we held an SES off-site meeting specifically focusing on concerns regarding the performance ratings of our African-American staff. Shortly thereafter, the Comptroller General decided that, in view of the importance of this issue, GAO should undertake an independent, objective, third-party assessment of the factors influencing the average rating differences between African-Americans and Caucasians. I agree with this decision. We should approach our concern about appraisal ratings for African-Americans with the same analytical rigor and independence that we use when approaching any engagement. We must also be prepared to implement recommendations coming out of this review. While we continue to have a major challenge regarding the average performance ratings of African-Americans, the percentages of African-Americans in senior management positions at GAO have increased in the last several years. I believe that the O&I monitoring reviews, direct access to top GAO management, and the other safeguards have played a significant role in these improvements. Specifically, from fiscal year 2000 to fiscal year 2007, the percentage of African-American staff in the SES/Senior Level (SL) increased from 7.1 percent to 11.6 percent, and at the Band III level the percentage increased from 6.7 percent to 10.8 percent. The following table shows the change in representation of African-American staff at the SES/SL and Band III levels for each year. Furthermore, the percentages of African-Americans in senior management positions at GAO compare favorably to the governmentwide percentages. While the percentage of African-Americans at the SES/SL level at GAO was lower than the governmentwide percentage in 2000, by September 2006, the GAO percentage had increased and exceeded the governmentwide percentage. At the Band III/GS-15 level, the percentage of African-American staff at GAO exceeded the governmentwide percentage in 2000 as well as in 2006. Table 2 lists the GAO and governmentwide percentages. Nonetheless, as an agency that leads by example, GAO should take additional steps. We must continue to improve our expectation-setting and feedback process so that it is more timely and specific. We need additional individualized training for designated staff, and we need to provide training for all supervisors on having candid conversations about performance. We also need to improve transparency in assigning supervisory roles, ensure that all staff have similar opportunities to perform key competencies, and hold managers accountable for results. Finally, we will implement an agencywide mentoring program this summer. We expect that this program will help all participants enhance their job performance and career development opportunities. Overall, GAO is making progress toward improving its processes and implementing various program changes that will help address important issues. I believe there are two compelling diversity challenges confronting GAO and the federal government. First is the continuing challenge of implementing sufficiently specific merit-based policies, safeguards, and training in order to minimize the ability of individual biases to adversely affect the outcome of those policies.
Second is the challenge of having managers who can communicate with diverse groups of staff, respecting their differences and effectively using their creativity to develop a more dynamic and productive work environment. For many people, the workplace is the most diverse place they encounter during the course of their day. We owe it to our employees and to the future of our country to improve our understanding of our differences, and to work toward a fairer and more inclusive workplace. Chairman Akaka, Chairman Davis, and members of the subcommittees, this concludes my prepared statement. At this time I would be pleased to answer any questions that you or other members of the subcommittees may have.
The Homeland Security Act of 2002 created DHS, bringing together 22 agencies and programs responsible for important aspects of homeland security. The intent behind the creation of a single department was to improve coordination, communication, and information sharing among these previously separate entities, thereby increasing their effectiveness in protecting the nation's security. Each of these organizations brought with it the capacity and expertise to provide training for its particular aspect of homeland security. For example, in several cases, such as the Coast Guard and FLETC, this training capacity, as well as the management systems supporting it, was transferred intact with the creation of the new department. In other cases, such as CBP and U.S. Immigration and Customs Enforcement (ICE), the training functions of legacy organizations were merged. Table 1 presents information on selected training characteristics of components in our review, including the origin of each component's training function. In addition, the Act led to the creation of the CHCO position in DHS, responsible for, among other human capital topics, oversight and planning of the training of employees. The CHCO, who reports directly to the department's Under Secretary for Management, has primary responsibility for defining and developing the department's role regarding training. Figure 1 depicts these positions as well as the department's major components in the context of DHS's overall organizational structure. Training both new and current staff to fill new roles and work in different ways will play a crucial part in the efforts of federal departments and agencies, such as DHS, to successfully transform their organizations. In 2004, we issued an assessment guide that introduces a framework for evaluating the management of training in the federal government. As presented in our guide, the training process can be segmented into four broad, interrelated phases: (1) planning/front-end analysis, (2) design/development, (3) implementation, and (4) evaluation. For each of these phases, we summarize key attributes of effective training programs and offer related issues and questions. Using this framework, this report identifies selected strategic training practices, with a focus on the planning and evaluation phases, that may offer an opportunity for others in DHS to build on experiences and practices discussed below. The results of a governmentwide survey conducted by the Office of Personnel Management in 2004 on human capital practices and employee attitudes suggest that efforts to identify and build upon examples of good training practice within DHS may be particularly relevant. For each of the eight questions in the 2004 Federal Human Capital Survey that focused on training-related topics, the percentage of DHS respondents providing positive responses (typically the top two options on a five-point scale) was lower than the governmentwide average. In fact, the DHS response ranged from 5 to 20 percentage points lower than the governmentwide average for the same questions. For example, 54 percent of respondents at DHS indicated that they received the training they needed in order to perform their jobs, compared to 60 percent governmentwide. Half (50 percent) of DHS respondents said that they were either satisfied or very satisfied with the training they received for their present jobs, as opposed to 55 percent who expressed these levels of satisfaction governmentwide.
The largest difference involved having electronic access to learning and training programs, where 51 percent of DHS respondents answered positively, compared to 71 percent governmentwide. A DHS official told us that the department is aware of the challenges reflected in these data and is currently exploring options with the Office of Personnel Management to conduct further analysis. The aim of this work would be to identify areas where DHS might target additional attention as well as provide a baseline for future attitude measures. DHS has made progress in addressing departmentwide training issues, and these efforts reflect some of the elements of a strategic approach toward training as described in our previous work. Most training-related activities at DHS—such as planning, delivery, and evaluation—primarily take place at the component level and relate to mission issues. Therefore, any successful approach regarding departmentwide training issues will require the concerted and coordinated efforts of multiple components within DHS as well as the ability of the CHCO to effectively lead a network of different training organizations. The department's current efforts, although promising, are still in the early stages, and they face significant challenges. Unless these challenges are successfully addressed, they may impede DHS's ability to achieve its departmentwide training goals. DHS recently developed a coordinated departmental training strategy that supports broader human capital and organizational goals and objectives. We have previously reported that effective organizations establish clear goals with an authority structure able to carry out strategies and tactics, that is, the day-to-day activities needed to support the organization's vision and mission. By so doing, a well-designed training function can be directly linked to the organization's strategic goals and help to ensure that the skills and competencies of its workforce enable the organization to perform its mission effectively. DHS's department-level training strategy is presented in its human capital and training strategic plans. Issued in October 2004, its human capital strategic plan includes selected training strategies, such as developing a leadership curriculum to ensure consistency of organizational values across the department and using training to support the implementation of the new DHS human capital management system, MAXHR. In July 2005, DHS issued its first departmental training plan, Department of Homeland Security Learning and Development Strategic Plan, which provides a strategic vision for departmentwide training. This plan is a significant and positive step toward addressing departmentwide training challenges. The plan identifies four short-term goals for fiscal year 2006 and one long-term goal for fiscal years 2006 through 2010. Among the short-term goals are such tasks as defining the scope of training activities and improving the governance process between the CHCO office and individual organizational components, supporting the rollout of MAXHR, identifying/implementing best practices, and addressing specific concerns regarding DHS's training facilities and advanced distributed learning studies. The plan also articulates a long-term goal for DHS to "become a recognized world-class learning organization where managers and supervisors effectively lead people." Each of these goals is followed by supporting strategies and tactics.
For example, to achieve its goal of ensuring the best use of training resources through the identification and implementation of best practices, the plan identifies specific strategies, one of which is to improve the awareness of ongoing DHS training activities among organizational components. This strategy is, in turn, supported by still more specific tactics, such as developing a site on the DHS Interactive system to facilitate the sharing of information across the training community. More significant than the fact that DHS issued a training strategic plan document is the fact that DHS followed an inclusive and collaborative process while developing it. We have previously reported that for high-performing, results-oriented organizations, a strategic plan is not simply a paper-driven exercise or onetime event, but rather the result of a dynamic and inclusive process wherein key stakeholders are consulted and involved in the identification of priorities and the formation of strategies. When creating its plan, DHS consulted training leaders at components throughout the department, in addition to others, to help develop and review its content. Several training leaders we spoke with thought highly of this process and the extent to which it provided them opportunities to contribute and comment on the draft plan. DHS has made considerable progress in addressing departmentwide training issues through the development of its first training strategic plan. However, there are areas where future efforts can be improved. Linkage to DHS organizational and human capital strategic plans. Our past work on strategic planning and management practices shows that effective strategic plans describe the alignment between an agency's long-term goals and objectives and the specific strategies planned to achieve them. Clearly linking training tactics with particular organizational objectives creates a direct line of sight that can both facilitate the ability of staff to work toward mission goals and enable stakeholders to provide meaningful oversight. In the introduction to the DHS training strategic plan, the department's CHCO highlights the value of this practice, stating that "the key purpose of [the] plan is to align our education, training and professional development efforts with the President's Management Agenda and the Department's vision, mission, core values and strategic plan." The DHS training strategic plan contains examples of goals, strategies, and tactics that align with and support goals found in the department's human capital and organizational strategic plans; however, these linkages are never actually identified or discussed in the plan itself. For example, the DHS training strategic plan contains a goal and several tactics related to MAXHR training. These, in turn, support a MAXHR goal and strategy in the department's human capital strategic plan as well as the "organizational excellence" goal of the DHS strategic plan. However, the training strategic plan does not show these linkages. Identifying such linkages, either in the training plan itself or in an appendix, would more clearly communicate to both internal and external stakeholders the connections and justifications for specific training goals, strategies, and tactics. DHS's own human capital strategic plan provides an illustration of one way to communicate linkages between goals and strategies contained in the plan and the broader organizational goals they are intended to support.
For example, in an appendix, the DHS human capital strategic plan contains a matrix that directly links strategies, such as developing a new Senior Executive Service (SES) performance management system, with specific objectives contained in the DHS strategic plan as well as the President's Management Agenda human capital standards for success. Usefulness of performance measures. We have previously reported several key characteristics of effective strategic and management plans, including the need for performance measures. Appropriate performance measures, along with accompanying targets, are important tools to enable internal and external stakeholders to effectively track the progress the department is making toward achieving its training goals and objectives. To this end, organizations may use a variety of performance measures—output, efficiency, customer service, quality, and outcome—each of which focuses on a different aspect of performance. The DHS training strategic plan contains few specific performance measures for its goals or strategies, and all of these are output measures. For example, the plan makes use of output measures in its requirement that certain actions, such as the development of a new management directive or the chartering of a team, be completed by the end of fiscal year 2006, and in establishing a deadline for when reports need to be completed in order to be included in the 2007 plan. In contrast to output measures like these, which gauge the level of activity or effort by measuring whether a particular thing is produced or a service performed, other types of measures, such as measures of customer satisfaction or program outcomes, focus on the impact or results of activities. By appropriately broadening the mix of measures it uses and more clearly identifying targets against which to assess its performance, DHS can improve the usefulness of its plan. After we completed our audit work, DHS training officials informed us that they decided to delay the development of performance measures until the rollout of the plan, when they could be developed by individual teams, as needed. They subsequently informed us that these teams will be held accountable for establishing further performance measures that are outcome-based and results-oriented. DHS's human capital strategic plan again provides an illustration of how the department's training strategic plan might begin to work toward the inclusion of different types of performance measures. For example, accompanying the strategy that DHS assess the feasibility of establishing a 21st Century Leadership Training and Development Institute, the plan identifies two performance measures—customer satisfaction and cost of delivery—along with specific targets for each. For the customer satisfaction measure, the plan establishes a target of 4.5 on a scale from 1 to 5. The plan also includes specific tactics to achieve the strategy, such as developing and obtaining cross-organizational support, developing measures and methodologies for leadership training, and implementing a learning management system, along with key milestone dates for completing them. The department may benefit from considering the experiences of leading organizations regarding the development of results-oriented performance measurement.
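As a rough illustration of the measure-and-target structure in that example, the sketch below pairs each measure with an explicit target and checks progress against it. Only the customer satisfaction target of 4.5 on a 1 to 5 scale comes from the plan as described; the cost figures, field names, and threshold logic are assumptions made for illustration, not DHS's actual tracking approach.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str
    measure_type: str          # e.g., output, efficiency, customer service, quality, outcome
    target: float
    actual: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # Compare the latest reported value against the stated target.
        return self.actual >= self.target if self.higher_is_better else self.actual <= self.target

measures = [
    # Target of 4.5 is the one cited from the human capital plan; the reported value is hypothetical.
    PerformanceMeasure("Leadership institute customer satisfaction (1-5 scale)",
                       "customer service", target=4.5, actual=4.2),
    # A cost-of-delivery measure is named in the plan; both figures here are hypothetical.
    PerformanceMeasure("Leadership training cost of delivery per participant (dollars)",
                       "efficiency", target=1500.0, actual=1650.0, higher_is_better=False),
]

for m in measures:
    status = "on track" if m.on_track() else "below target"
    print(f"{m.name}: target {m.target}, reported {m.actual} -> {status}")
```

A mix that includes customer service and efficiency measures of this kind, rather than output measures alone, is what a broader set of measures for the training plan might look like in practice.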
In general, the results-oriented organizations we have studied that were successful in measuring their performance developed measures that were (1) tied to program goals and demonstrated the degree to which the desired results were achieved, (2) limited to the vital few considered essential to producing data for decision making, (3) responsive to multiple priorities, and (4) linked to responsibility in order to establish accountability for results. Similar to the consultative process DHS followed when developing the goals and strategies contained in its training strategic plan, decisions concerning the selection of an appropriate set of performance measures should also be based on input from key stakeholders to determine what is important to them when assessing the department's performance regarding training. Clear and appropriate performance measures, developed in this way, can also provide DHS with valuable information, especially significant in the current fiscal environment, when it seeks to justify requests for resources from Congress. Under the overall direction of the CHCO office, DHS has established a structure of training councils and groups that cover a wide range of issues and include representatives from each organizational component within DHS. The department is in the process of using these bodies to facilitate communication and the sharing of information within its diverse training community. In some instances, these councils and groups foster greater collaboration and coordination on training policies, programs, and the sharing of training opportunities. We have previously reported that agencies with a strategic approach to training recognize the importance of having training officials and other human capital professionals work in partnership with other agency leaders and stakeholders on training efforts. The Training Leaders Council (TLC) plays a vital role in DHS's efforts to foster communication and interchange among the department's various training communities. This council consists of senior training leaders from each of the department's components as well as representatives from several department-level headquarters staff and support organizations with an interest in training-related issues. Started in October 2004 and formally chartered by the CHCO in March 2005, the TLC has the mission of establishing and sustaining a collaborative community with the aim of promoting high-quality training, education, and development throughout DHS. To this end, it functions as a convener of training leaders from throughout the department and provides an overarching framework for several preexisting training groups and councils that were reestablished as standing committees of the TLC. Membership of the TLC consists of senior training leaders from each DHS component. In addition, most of these leaders, as well as other training staff, serve on one or more of its subgroups. See figure 2 for descriptions of the TLC and each of its subgroups. One key function of the TLC and these other training groups is to serve as a "community of practice" wherein officials can discuss common training challenges and share knowledge and best practices. For example, the Training Evaluation and Quality Assurance Group, composed of DHS training professionals responsible for evaluating and ensuring the quality of DHS training programs, conducted an informal survey of evaluation practices in various components with the intent of identifying effective evaluation approaches.
A training official involved in the group told us that this survey was particularly important for the department's newer organizations, such as the Directorate for Information Analysis and Infrastructure Protection, which need to establish new practices from scratch. According to this official, his directorate and other organizations within DHS plan to use the group as a way to tap into the experience of other components within the department, such as CBP and FLETC, which have considerably more experience with training evaluation. In addition to sharing information about training practices, these groups can also provide a forum for exchanging practical information with the goal of making more efficient use of existing resources. For example, one training official told us that, as a result of information obtained at TLC meetings, the official became aware of free training space available at facilities of two other components located in the Washington, D.C., metro area. Also, as a result of participating in these meetings, the official's organization was able to send an additional person to the Federal Executive Institute after becoming aware that another component had surplus spaces and was offering them at a reduced price to other components within DHS. Another role carried out by the TLC is to collaborate on the formulation of training policies and advise the department's CHCO accordingly. For example, the TLC, in cooperation with staff from the CHCO office and an external contractor, conducted a survey of training sites throughout the department in 2004. This study cataloged available physical resources and site capacities with the aim of identifying potential opportunities to share these resources more efficiently, consolidate unneeded or duplicative sites, and increase training collaboration and effectiveness. This effort resulted in a series of recommendations that were subsequently incorporated into the department's training strategic plan. The activities of the department's Advanced Distributed Learning Group (ADLG) provide another example of how training officials from different components have worked together to develop proposals for solutions to departmentwide challenges. This group identified several issues in the area of technology and learning, including the need for a compatible IT infrastructure across components and the fact that some components lacked established systems with which to coordinate and manage training opportunities and attendance. Working with an outside consultant, the ADLG developed a proposal that DHS create a new Advanced Distributed Learning (ADL) Program Management Office to oversee the process of setting common standards. This proposal was subsequently included as part of the department's training strategic plan. The ADLG's work also led to DHS entering into a memorandum of understanding with the Office of Management and Budget and the Office of Personnel Management to create a DHS headquarters learning management system. Throughout this process, the ADLG represented the interests of the DHS training community as it worked with representatives from the Chief Information Officer's office and other functions within the department, as well as outside consultants. Despite these positive steps, DHS's effort to foster communication and coordination through departmentwide training councils and groups is at a relatively early stage and so far has produced varied results.
Some training organizations, such as the TLC and ADLG, have met regularly, leading to tangible results, while others, such as the Training Evaluation and Quality Assurance Group, have met a few times and have only begun to set the groundwork for substantive coordination and collaboration in these areas. In addition, a training official told us that even active organizations like the TLC have encountered difficulties related to the relative lack of staff support for these efforts. As a result, additional burdens sometimes fall to the leaders and members of these groups who, in addition to serving on one or more departmental training groups or councils, must carry out full-time training positions at their home components. Another way DHS addresses departmentwide training issues is to directly provide training interventions or resources that address selected departmentwide needs, goals, or objectives. Three examples of the areas where DHS has worked to directly provide or support training on the departmental level are the following: (1) training related to the implementation of MAXHR, (2) DHS leadership development, and (3) training related to civil rights and civil liberties. Training for MAXHR implementation. DHS's new human capital management system, known as MAXHR, represents a fundamental change in many of the department's human capital policies and procedures that will affect a large majority—approximately 110,000—of its civilian employees. MAXHR covers many key human capital areas, such as pay, performance management, classification, labor relations, adverse actions, and employee appeals, and will be implemented in phases affecting increasing numbers of employees over the next several years. DHS correctly recognizes that a substantial investment in training is a key aspect of effectively implementing MAXHR, and in particular, the new performance management system it establishes. The need for in-depth performance management and employee development training is further supported by the department's results on the 2004 Federal Human Capital Survey. In this survey, just over half of DHS respondents—51 percent—believe supervisors or team leaders in their work units encourage their development at work, significantly less than the governmentwide response of 64 percent. DHS officials said they plan to educate all affected DHS employees on the details of the new system, how it will affect them, and the purpose of the changes. To do this, the department decided to develop, coordinate, and manage MAXHR training centrally through the CHCO office and offered its first training in May 2005. DHS plans to continue to provide its workforce with MAXHR training over the next several years following a phased approach that takes into account both when individual provisions of the new regulations take effect and the different audiences that exist within the DHS community, including human capital personnel, supervisors, and general employees. See figure 3 for a depiction of planned training during 2005 and its intended audiences. The department has worked with contractors to develop training that uses a variety of approaches, including classroom instruction, ADL, handbooks, manuals, and quick reference guides, depending on specific needs. For example, in May 2005, labor relations/employee relations specialists and attorneys in the department received 2½ days of training on the provisions of the new regulations and the major differences between them and previous programs.
Structured as a "train the trainer" type of intervention intended to prepare participants to conduct supervisor briefings in their own components, this was an instructor-led course held at sites across the country. In addition to educating individuals about the regulations, procedures, and systems associated with MAXHR and the adoption of a new performance management system, the department also plans to offer training specifically targeted to developing the skills and behaviors that will be necessary for its successful implementation. For example, in July 2005, supervisors began to receive training on techniques for coaching and mentoring employees and providing them with meaningful feedback. DHS leadership development training. Leadership development is another area that top management in DHS acknowledged as appropriate for departmentwide training to supplement existing component-level offerings. In 2004, the Secretary of Homeland Security announced the "One DHS" policy that identified the need to establish a common leadership competency framework for the department, as well as a unified training curriculum for current and future leaders. The purpose of this framework was to identify the skills, abilities, and attributes necessary for success as a DHS leader and to establish measurable standards for evaluation. To this end, the CHCO established the DHS Leadership Training and Development Group (LTDG), comprising training officials from each DHS component who combined expertise in leadership development with personal knowledge of the missions and unique aspects of their particular organizational components. The LTDG met regularly from late 2003 to mid-2004. During this time, the group developed a set of new core leadership competencies for DHS supervisors, managers, and executives, which it issued in April 2004. According to a DHS official, since the development of these new competencies, they have been used by one component as part of its own leadership development plan, and they have also helped to guide and inform current MAXHR leadership development efforts. DHS has recently taken steps regarding another facet of its leadership development initiative—its SES Candidate Development Program. In June 2005, DHS issued a management directive establishing the SES Candidate Development Program, which included a rigorous selection process and critical leadership development opportunities, such as mentoring, developmental assignments, and action learning designed to give SES candidates experience in different job roles. DHS initially announced that it planned to implement the program in fiscal year 2005, but now may delay doing so until fiscal year 2006. Civil rights/civil liberties training. A third area in which DHS has taken steps to provide or support departmental training involves civil rights and civil liberties. FLETC's Behavioral Science Division and Legal Division, working with the DHS Office of Civil Rights and Civil Liberties, produced several training interventions, including Web-based, CD-ROM, and in-person programs designed to increase sensitivity and understanding in protecting human and constitutional rights. As part of this effort, FLETC held diversity seminars that focused on promoting understanding and respect for religious practices, particularly those of the Arab and Muslim communities. In another example of this effort, the Office of Civil Rights and Civil Liberties produced Web-based training on current policies regarding racial profiling.
Our interviews with DHS training leaders suggest that further improvements can be made in communicating the availability of selected departmentwide training programs and resources. Staff at the Office of Civil Rights and Civil Liberties provided copies of its civil rights and liberties programs to training offices at each component in the department. While some senior training officials told us that their components actively disseminated this material by placing it on the component’s training Web site or incorporating it into preexisting courses, other senior training officials we spoke with were unaware of any departmental training on these topics. In addition, other officials told us that their component’s training office had independently developed its own material on Arabic sensitivity training, wholly apart from similar efforts undertaken by others in the department. More specifically, they told us that their development of certain training modules predated the development of very similar modules later prepared by DHS’s Office of Civil Rights and Civil Liberties and FLETC, leading these officials to conclude that they may have been able to assist departmental efforts by sharing their work had they been aware of them. As DHS moves forward, it faces challenges to achieving departmentwide training goals. These challenges include lack of common management information systems, the absence of commonly understood training terminology across components, the lack of specificity in authority and accountability relationships between the CHCO office and components, insufficient planning for effective implementation, and insufficient resources for ensuring effective implementation of training strategies. The formation of DHS from 22 legacy agencies and programs has created challenges to achieving departmentwide training goals. Of particular concern to the training officials we spoke with are the lack of common management information systems and the absence of commonly understood training terminology across components. The training functions at DHS’s components largely operate as they did before the creation of the department, with many of the same policies, practices, and infrastructures of their former organizations, and within these organizations are, for the most part, the same training leaders. It will take time for these organizations to evolve into a coordinated, integrated department. We have previously reported that successful transformations of large organizations, even those faced with less strenuous reorganizations and pressure for immediate results than DHS, can take from 5 to 7 years to fully take hold. One issue DHS officials raised was the lack of common or compatible management information systems, such as information technology or financial management, which can inform decision makers’ efforts to make efficient use of training resources across components. For example, DHS officials stated that a key challenge they encountered involved the difficulty of knowing what others were doing outside their particular offices or components. DHS lacks any unified sourcebook that employees could consult for the names, telephone numbers, and other relevant information of key contact persons in areas such as acquisition. Obtaining accurate information about resources and products available in the marketplace as well as data on users, vendors, and kinds of work has been a challenge to that effort. 
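A common information system of the kind officials describe as missing could, at its simplest, expose shared records that any component can query. The sketch below is hypothetical only; the facility entries, contact fields, and lookup are invented for illustration and do not represent an existing DHS system.

```python
from dataclasses import dataclass

@dataclass
class TrainingFacility:
    component: str
    location: str
    classrooms_available: int
    point_of_contact: str      # the kind of contact detail a unified sourcebook would hold

# Hypothetical inventory entries that components could register and search.
inventory = [
    TrainingFacility("CBP", "Washington, D.C.", 2, "training-office@cbp.example"),
    TrainingFacility("FLETC", "Glynco, GA", 5, "scheduling@fletc.example"),
    TrainingFacility("ICE", "Washington, D.C.", 0, "training@ice.example"),
]

def facilities_with_space(metro_area: str) -> list[TrainingFacility]:
    """Return facilities in a metro area that currently have classroom space to share."""
    return [f for f in inventory if f.location == metro_area and f.classrooms_available > 0]

for facility in facilities_with_space("Washington, D.C."):
    print(f"{facility.component}: {facility.classrooms_available} classrooms, contact {facility.point_of_contact}")
```

Even a registry this simple would let each component see what space and contacts the others have, which is the gap the officials described.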
Another issue cited by officials concerned the lack of compatibility between learning management systems across components. In addition, some training officials expressed concerns about the accuracy or timeliness of some training data, which can limit or at least considerably delay their ability to track and fully account for funds spent on training and training-related travel. DHS has several efforts under way to address these issues, including the development of an online training facilities inventory intended to increase awareness of existing resources across the department and its decision to begin developing common ADL policies and standards. Officials also told us that there was little or no common understanding among DHS organizational components regarding the meaning of such basic terms as "subject matter expert," "orientation," and even "training." The lack of commonly understood terminology has presented challenges when officials from different components, including those participating in departmental training councils and groups, try to share practices with each other. These officials told us that the lack of commonly understood terminology can also affect their interactions with outside entities, such as contractors and state and local agencies. A DHS official told us that, besides facilitating communications and enabling components to share practices, a common nomenclature would increase the transparency of training practices to external contractors as well as the internal DHS training community. The department's training strategic plan calls for the creation of a common training language and glossary of terms in fiscal year 2006, and officials told us that they are currently in the early stages of creating such a glossary. An effective management control environment appropriately assigns authority and delegates responsibility to the proper personnel to achieve organizational goals and objectives. In such an environment, staff members who are delegated responsibility are given corresponding authority. In light of this, DHS's adoption of a "dual accountability" governance structure in 2004 presents certain challenges. Under this concept, heads of organizational components and the CHCO share responsibility for effective training in DHS. With a shared responsibility for DHS training, both the CHCO and component heads should have appropriate authority for making decisions regarding training. DHS does not specify how authority for training matters will be shared between the CHCO office and components for budgeting, staffing, and policy (e.g., determining which training functions, if any, should remain with components or be performed by DHS headquarters). The DHS management directive on training currently in place is a high-level two-page document that provides very few specifics on policies, procedures, and authorities for the CHCO office and the components. The department recognizes the need to clarify the responsibilities and authorities of the CHCO office and the components, as indicated by the inclusion of this issue in the DHS training strategic plan. Many of the tactics included in the plan would be difficult to successfully implement without first having a clear understanding of the responsibilities and authorities of the key organizations involved.
More specifically, in the absence of clear authority relationships, decisions regarding how particular component training goals and strategies are to be incorporated in the DHS training strategic plan, or which training facilities should be consolidated to achieve departmental efficiencies, will be difficult to make. Without moving ahead with this effort in a timely fashion and completing the process of specifying how the CHCO office and components will share authority over training matters, it will be difficult for DHS to make the progress on its departmentwide training agenda necessary to effectively implement the many strategies and tactics planned for fiscal year 2006. In addition, DHS's efforts at coordinating training across components and clarifying roles and relationships between departmental functions and organizational components may be further hampered by the fact that the management directive governing the integration of the human capital function claims that the Coast Guard and the Secret Service are statutorily exempt from its application. We found no reasonable basis to conclude that the directive could not be made applicable to them and are not aware of any explicit statutory exemption that would prevent the application of this directive. Moreover, exempting the Coast Guard and the Secret Service from the provisions of this directive casts doubt on the authority and accountability relationships between these components and the CHCO, potentially complicating the department's objective of clarifying the responsibilities, accountability, and authorities of the CHCO office and the components set forth in DHS's training strategic plan. In and of itself, DHS's dual accountability authority structure is not an obstacle to implementation of departmentwide training efforts. However, without detailed implementation plans, it presents potentially significant challenges. Because of this shared authority, DHS will need to take great care when planning for departmentwide training initiatives involving multiple organizational components to ensure that resources are aligned with the organizational units performing activities, especially those related to cross-organizational sharing of training and delivery of common training. The lack of comprehensive and rigorous planning can lead to confusion over responsibilities, lack of coordination, and missed deadlines. Regular and rigorous use of detailed implementation plans is necessary to implement decisions and carry out activities in a coordinated manner. After we completed our audit work, DHS informed us that it plans to establish 31 tactic teams to take ownership of each of the tactics in the DHS training strategic plan that are to be completed by the end of fiscal year 2006. As of mid-August 2005, DHS provided us with documentation indicating that 3 of these teams had been established. These teams appear to have taken promising steps toward the establishment of detailed plans for implementing their respective training tactics by developing draft objectives, deliverables, and closure criteria. But as fiscal year 2006 approaches, time is short for the CHCO office and the components to establish the remaining teams and then take the actions necessary to develop and put in place the detailed plans that will be critical for effectively implementing DHS's many training tactics by the end of the coming fiscal year.
The TLC’s ADLG has made use of this type of detailed approach in a report proposing a distance learning architecture for the department. Appended to its report is a detailed plan outlining the major activities, milestones, resources, and components needed to support the successful implementation of the proposal. Several training officials told us they were concerned about the lack of dedicated resources and related capacity to carry out departmental initiatives. At the time we started our review, the CHCO office had only one full-time permanent employee dedicated to carrying out these activities; consequently, both training leaders and staff from organizational components were relied on to contribute to departmentwide efforts. After we concluded our audit work, a DHS official told us that the CHCO office had recently hired two additional full-time training staff: an ADL program manager and a staffer to oversee a recently approved SES candidate development program and headquarters operational leadership development. Individual components have also provided some assistance to departmentwide efforts through the appointment of temporary personnel. In late 2004, CBP and FLETC each detailed a staff member to the CHCO office to work on training-related projects. In addition, DHS has contracted for services to address selected departmentwide issues, such as setting common standards for ADL and reviewing DHS training facilities. DHS’s departmental training councils and groups are almost exclusively staffed by component training leaders who already have full-time training commitments. The department’s training strategic plan identifies many tactics for fiscal year 2006—including creating a common training language and glossary of training terms, establishing a repository for course catalog information, and developing a DHS training Web site—that will require considerable staff support to implement. Successful and timely completion of these and other initiatives will depend on sufficient resources being provided. It is essential for federal agencies to ensure that their training efforts are part of—and are driven by—their organizational strategic and performance planning processes. We have reported that aligning training with strategic priorities and systematically evaluating training activities play key roles in helping agencies to ensure that training is strategically focused on improving performance and meeting overall organizational goals. Strategic training practices in several DHS components or programs may provide models or insights to others in the department regarding ways to improve training practices. In areas where some components employed strategic practices, other components did not. We have previously reported that agencies demonstrating a strategic approach to training align their training efforts with overall strategic priorities. To do this, agencies can employ a variety of practices, such as linking training activities to strategic planning and budgeting and performing front-end analysis to ensure that training activities are not initiated in an ad hoc, uncoordinated manner, but rather are focused on improving performance toward the agency’s goals. Some components in DHS applied the strategic practice of aligning training with organizational priorities, while others did not. CBP links its new and existing training activities to its strategic priorities when planning for its strategic initiatives and expenditures. 
Importantly, the head of training at CBP is at the decision-making table with other CBP leaders to help establish training priorities consistent with the priorities of the CBP Commissioner. Relevant program managers are asked, "What training do you need to achieve the goals in your strategic plan?" Such discussions took place during planning for CBP's custom trade pact initiative. During each budget cycle, CBP's central training office issues a "call for training" to its mission and mission-support customers to estimate CBP's training needs for existing training activities and prioritize these needs based upon the Commissioner's priorities. Prior to establishing this process, training was mostly decided on a first-come, first-served basis without clear and transparent linkages to organizational priorities. CBP's current process results in an annual training plan in which training needs are identified by priority as well as major occupational type, such as border patrol agent or CBP officer. Training decisions are based on whether the training requested is critical, necessary, or "nice to have." During fiscal year training plan implementation, CBP tracks actual training activity through a central database to determine whether CBP is using its planned training resources. By tracking plan usage through a centrally managed database, CBP is able to reallocate unused training funds prior to the end of the fiscal year either to training activities that were not included in its original plan because of capacity constraints or to emerging training priorities. The Coast Guard has adopted a strategic and analytic approach to training through its use of the Human Performance Technology (HPT) model—a front-end training assessment process to determine the cause of performance problems. The process starts with the assumption that many factors influence individual and unit performance, and it is important to determine what those factors are before concluding that training is the solution. From its HPT analysis, the Coast Guard determines whether training is needed or whether another type of solution, such as a policy change, would be more appropriate. For example, in addressing a problem in aviation maintenance, a Coast Guard working group looked at likely causes of its performance problems and concluded that focusing on making aviation maintenance training better was not the only solution. More specifically, training officials encountered problems with job dissatisfaction and subpar performance from aviation chief warrant officers. In this case, training officials used HPT to analyze the nature of work performed by those responsible for aviation maintenance and concluded that there was not a good match between job skills and responsibilities. Specifically, over the last 20 years, the scope and nature of the work performed by chief warrant officers changed significantly from maintaining components to managing aircraft systems. Performance problems were mainly caused by significant changes in the job functions of these officers over the years rather than by a lack of adequate training. In cases where the HPT analysis concludes that training is warranted, a training analysis is performed to determine the specific training interventions.
For example, in implementing activities related to the Maritime Transportation Security Act, the Coast Guard analyzed its training needs through the HPT process to determine training necessary to help maritime inspectors reduce the exposure of ports and waterways to terrorist activities. The analysis identified the skills and knowledge necessary for new maritime inspector tasks and provided training interventions, such as developing job aids and targeted classes, to prepare inspectors for the tasks most relevant to support their new role. New courses were piloted and then subjected to multilevel evaluations to assess their effectiveness and potential impact on employee performance. Agencies demonstrating a strategic approach to training employ a variety of practices, such as systematically evaluating training, actively incorporating feedback during training design, and using feedback from multiple perspectives. Several components and programs we examined at DHS used these practices, while others did not. One commonly accepted model used for assessing and evaluating training programs consists of five levels of assessment (see fig. 4). In our review, virtually all components captured Level I data focusing on end-of-course reactions, while several also collected Level II data focusing on changes in employee skill, knowledge, or abilities. Several components evaluated, or were planning to evaluate, the impact of selected training programs on individual behavior, represented by Level III evaluations. To measure the real impact of training, however, agencies need to move beyond data focused primarily on inputs and outputs and develop additional indicators that help determine how training efforts contribute to the accomplishment of agency goals and objectives. At a couple of components, DHS officials told us they conducted Level IV evaluations, which assess the effectiveness of training interventions. We found no examples of the department or its components measuring the return on investment of training activities (Level V). Training effectiveness should be measured against organizational performance; however, not all levels of training evaluation require or are suitable for return on investment analysis. Determining whether training programs merit the cost of using such an approach depends upon the programs’ significance and appropriateness. CBP takes a systematic approach to evaluating its training activities through its National Training Evaluation Program (NTEP) to help program managers and trainers make more informed decisions on the effectiveness of training courses and their delivery. Despite the fact that CBP is a large and decentralized organization, NTEP has enabled it to collect course evaluation information and make this information available to a wide range of users in a timely manner. NTEP has also standardized evaluation data to allow for comparison of training throughout various field locations. Before the rollout of NTEP, CBP did not use a standard mechanism for collecting evaluation data, which, according to a CBP official, made it difficult to gather evaluation data nationally. CBP focuses on collecting both end-of-course student reactions (Level I) and supervisor assessments of student on-the-job performance after attending the training (Level III). Electronic or paper-based evaluations are entered into the NTEP information system. 
The “close to real time” online data enables supervisors to perform trend analysis on training quality and provides opportunities for them to troubleshoot training deficiencies and identify high-performing courses. The NTEP online system allows CBP employees access to evaluation data on a need to know basis with four levels of access, while enabling them to locate evaluation data for any training class by date. Evaluation reports are aggregated for review by senior CBP officials. A CBP official told us that collecting course evaluation data is labor intensive, especially since many field operations still use paper processes. In addition, CBP has experienced a relatively low submission rate for Level I evaluation data for many of its training classes. The official told us that this was especially true for end-of-course reactions from staff in the field, where only about one-third of officer-related course participants submit evaluation forms. Given cost and labor challenges, CBP has targeted areas for evaluation that it believes are important, such as training related to its “One Face at the Border” initiative. In addition, agencies with a strategic approach to training do not wait until the conclusion of a training intervention to conduct evaluations. Rather, they approach evaluation through an iterative process capable of informing all stages of training. DHS’s CHCO office used multiple forms of feedback from employees to develop its training strategy for MAXHR. From February through April 2005, the department administered surveys and conducted focus groups to obtain information on the needs, attitudes, and reactions of different communities affected by MAXHR. Shortly after issuing its new human capital regulations, the department provided basic information to all employees on the nature and timeline of changes they could expect under MAXHR through a Web broadcast. After the broadcast an online survey was used to obtain feedback from employees regarding the broadcast itself and their general feelings and concerns about the MAXHR initiative. DHS followed this initial survey with a larger survey to gather additional feedback on how information regarding MAXHR had been communicated, as well as specific areas where employees wanted additional information. Concerns about the need for training were prominent among the more than 9,000 responses received, with respondents ranking training as the second most serious challenge to the successful implementation of MAXHR. According to a senior DHS official, the survey results will inform subsequent training and communication efforts. DHS also collected evaluative feedback by conducting a series of focus groups held in locations across the country. The aim of these sessions was to validate the design of the performance management program established under MAXHR and identify concerns that would inform the development of additional training. Consistent with the strategic training practice of seeking out different perspectives when redesigning and assessing training efforts, DHS staff held separate focus groups for bargaining unit employees, non-bargaining unit employees, and supervisors and managers at all of the locations visited. This enabled them to identify issues of particular concern to each of these groups as well as issues common to all three. For example, both the bargaining unit and non-bargaining unit employee focus groups raised concerns about supervisors having inadequate skills for fairly administering the new performance management system. 
This concern was also shared by supervisors and managers themselves, who expressed the need for additional skills training in areas such as goal setting and providing performance feedback. The sessions validated the CHCO office's plans to offer performance management training to supervisors and managers before the implementation of the new system and assisted in refining issues for future training. FLETC's methods for evaluating its major training programs include feedback from multiple perspectives when examining the effects of training on actual employee job performance. FLETC's Level III evaluations obtain feedback from both trainees and their supervisors to inform future improvements to training curricula. Evaluation results are compiled into a comprehensive report used during FLETC's periodic curriculum reviews on its major training programs, such as the Natural Resource Police Training Program. The report contains detailed feedback from both the trainee and supervisor perspectives 6 months to 1 year after the trainee has attended the training program. For example, for the Natural Resource Police Training Program, FLETC analyzed how well the program prepared trainees in all aspects of their jobs. In this case, analysis identified those courses that had benefited program trainees the least—including determining speed from skid marks and death notification. Training designers can use report information to improve program curricula and refocus training on knowledge and skill areas most critical to performing the job. In addition to Level III evaluation results, FLETC's training designers make program and individual class changes by using other methods of evaluation, such as direct student feedback after classes and trainee examinations, which determine how well the trainees understood the course material immediately following the program. The creation of DHS resulted in significant cultural and transformational challenges for the department. We have previously reported that training is one way organizations successfully address cultural issues while simultaneously facilitating new ways to work toward the achievement of organizational goals. Among the DHS components in our review, some merged cultures from different legacy organizations (CBP, ICE), another component came as a small organization that greatly expanded when joining DHS (Federal Air Marshal Service), while others joined DHS intact (Secret Service, Coast Guard, FLETC), and still another was previously a part of a larger legacy organization (CIS). Each component faces the need to find a way to identify itself as part of the larger DHS organization, that is, with a sense of affiliation rather than as an outsider looking in. At the same time, components must either maintain their existing cultures or develop new cultures to adapt to changing missions and needs. The key is to build upon positive aspects of the components' cultures as the larger organization develops its own culture. Agencies that undergo successful transformations change more than just their organizational charts; they also make fundamental changes in basic operations, such as how they approach strategic human capital management. DHS understands this, and the MAXHR initiative is part of an effort by the department to fundamentally change its approach to human capital management by establishing a personnel system that is flexible, performance oriented, and market based.
The Secretary of Homeland Security and other top officials have actively supported the role of training in implementing these changes by making it a leadership expectation that all DHS executives, managers, and supervisors be personally involved as both participants in and supporters of MAXHR training efforts. The CHCO office, working with the assistance of outside contractors, has developed several training interventions aimed at providing these groups with the tools and information needed to champion the benefits of a performance-based culture and successfully implement MAXHR in their components. In August 2005, DHS sponsored a 2½-day training program for 350 to 400 of the department's senior executives and flag officers. The program covered a range of topics, including an update on current DHS priorities; techniques and best practices for how senior leaders can effectively support and implement these priorities; and specific management, communication, and training approaches that can be used to support the creation of a performance-based culture. The Secretary, Deputy Secretary, and Under Secretary for Management all participated in the program, which also featured presentations from human capital and organizational change experts from outside the department. In addition to its focus on MAXHR implementation, which included both large and small group sessions wherein participants could discuss performance management and share information on practices, the course also provided a forum for the department's top leadership and senior executives to review the then recently issued recommendations resulting from the Secretary's Second Stage Review process. Another training intervention sponsored by the department directly targets managers and supervisors who will be responsible for carrying out many of the key behaviors associated with the new system and whose active support is viewed by DHS as critical for achieving the transformation to a performance-based culture. The 2½-day program focuses on developing and improving interpersonal, managerial, and other so-called soft skills. DHS expects to provide the training to approximately 12,000 managers and supervisors throughout the department. On the component level, training has also played an important role in CBP's effort to transform from the traditional, largely siloed approach used by its legacy agencies when protecting the nation's borders to a new integrated concept that it believes is more in line with its current needs. Officials noted that the merger into CBP led to some resistance from employees who had not yet understood or accepted the reasons for the merger. These same officials acknowledged that they must continue to work at informing employees why changes were made and provide vehicles for better integration through training. For example, in the "One Face at the Border" initiative, supervisory training has incorporated some elements of cultural integration by including a session on bridging the culture gap. Officials at CBP designed and piloted a training module to be added to the supervisory curriculum specifically targeting how supervisors can more effectively understand the value and perspective of staff coming out of the legacy organizational cultures. In addition, training played a key role in facilitating the transition of CBP's workforce from its three legacy organizations.
Training for the new CBP officer and CBP agriculture specialist positions aimed to improve coordination and communication across inspection functions and enhance the flexibility of CBP's workforce. Specifically, CBP created a series of training courses to provide former Customs and former Immigration and Naturalization Service officers with the knowledge and skills necessary to carry out the responsibilities of this new position. To develop this training, CBP-wide working groups identified and validated critical tasks for the new frontline CBP officer to perform. A mix of training delivery methods was used (e.g., e-learning and instructor-led), and classroom knowledge and skills were reinforced with on-the-job training. CBP provided extensive train-the-trainer courses so that trainers could return to their field sites and instruct officers there. (See app. 2.) DHS must continue to make progress on three important aspects of training as it moves forward: (1) forging an effective role for training at the departmental level and implementing its departmentwide training strategy; (2) taking a strategic approach to training practices, in part by building upon examples of good practice to be found among its component organizations, as well as considering other examples of strategic practices; and (3) finding ways that training can help to foster organizational transformation and cultural change within the department. To date, DHS has taken positive steps in these areas, yet significant challenges lie ahead. The ability to make decisions from a departmentwide perspective and then effectively implement them will help determine whether training in DHS achieves its intended results. Strong leadership will play a critical role in this process. To be successful, DHS will need to have both a clear plan of action and the ability to anticipate and overcome several implementation challenges. The creation of the TLC and the development of the department's first training strategic plan both represent a good start in this process. Better performance measures, more specific milestones, and the inclusion of performance targets would make DHS's strategic training plan a more useful tool for both internal and external stakeholders to use in tracking the department's progress toward achieving its training objectives. Clarifying authority relationships between the CHCO and component heads, developing detailed implementation plans, and giving appropriate attention to providing resources to implement training initiatives when setting funding priorities are also likely to be critical factors in building and sustaining an effective role for department-level training at DHS. A strategic approach toward training is also very important as DHS seeks to build on its current efforts and strives to move forward. As we have noted, some programs and components in DHS already use specific strategic training practices, and other components within the department can benefit from their example. As DHS implements new training programs, such as the large-scale, multistage training being developed to support the implementation of MAXHR, it has a valuable opportunity to reflect the lessons learned from these experiences in subsequent departmentwide training efforts. Finally, the transition to a new department has brought with it cultural challenges, and training can play a role in both defining and refining an effective DHS culture without sacrificing the cultural history of its components.
To help DHS establish and implement an effective and strategic approach to departmentwide training, we recommend that the Secretary of Homeland Security take the following actions: adopt additional good strategic planning and management practices to enhance the department's training strategic plan by (1) creating a clearer crosswalk between specific training goals and objectives and DHS's organizational and human capital strategic goals and (2) developing appropriate training performance measures and targets; clearly specify authority and accountability relationships between the CHCO office and organizational components regarding training as a first step to addressing issues DHS has identified for fiscal year 2006; ensure that the department and component organizations develop detailed implementation plans and related processes for training initiatives; and when setting funding priorities, give appropriate attention to providing resources to support training councils and groups to further DHS's capacity to achieve its departmentwide training goals. We provided a draft of this report to the Secretary of Homeland Security for comment and received written comments from DHS that are reprinted in appendix III. In addition, we received technical comments and clarifications, which we incorporated where appropriate. DHS generally agreed with our recommendations. We will provide copies of this report to the Secretary of Homeland Security and other interested parties. Copies will also be provided to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9490 or [email protected]. Major contributors to this report were Kimberly Gianopoulos, Assistant Director; Peter J. Del Toro; Robert Yetvin; and Gerard Burke. To achieve our objectives, we reviewed training at the Department of Homeland Security (DHS) at both the departmental and component levels. When examining training at the departmental level, we collected, reviewed, and analyzed the department's training rules, procedures, policies, and organizational charts; departmental, human capital, and training strategic plans; human capital and training management directives; Internet and intranet Web pages; and other relevant documents. To further our understanding of training at DHS and the issues and challenges involved, we interviewed training and human capital officials in the Office of the Chief Human Capital Officer and the leaders and coleaders of DHS's training councils and groups. We also observed the January 2005 meeting of the Training Leaders Council. We supplemented our review of departmental training at DHS by examining the department's effort to use training related to MAXHR to foster transformation and cultural change in the department. In addition, we reviewed training at major organizational components in DHS and selected the six largest components based on staff size and budget. Using these criteria, we reviewed training at Customs and Border Protection (CBP), Citizenship and Immigration Services, Immigration and Customs Enforcement (including the Federal Air Marshal Service, the Federal Protective Service, and the Leadership Development Center), the Coast Guard, the Secret Service, and the Transportation Security Administration. See figure 5 for a depiction of the DHS organizational structure in place during the time of our review.
These components collectively represent about 95 percent of the total staff at DHS. We also included the Federal Law Enforcement Training Center because of the special role it plays in training employees from other DHS components. When examining training at selected components, we reviewed component-level strategic, human capital, and training plans when available; training budget requests and expenditure documents; training procedures, policies, and organizational charts; rules and policies for identifying and prioritizing training programs; Internet and intranet Web pages; selected training course materials; and other relevant documents produced by these components. To further our understanding of training at the component level, we also interviewed training officials at each of the selected components and identified these individuals based on their knowledge, experience, and leadership roles. We conducted our interviews at component headquarters or field offices located in the Washington, D.C., area. In addition, as part of our review of DHS’s efforts to foster transformation and cultural change, we observed training related to CBP’s “One Face at the Border” initiative in northern Virginia. To help determine whether DHS used a strategic approach in planning and evaluating its training activities at the departmental or component levels, we referenced criteria contained in our guide for assessing strategic training and development efforts in the federal government. This guide outlines a framework for assessing training efforts, consisting of a set of principles and key questions that federal agencies can use to ensure that their training investments are targeted strategically and not wasted on efforts that are irrelevant, duplicative, or ineffective. We selected our case examples based on their suitability for demonstrating specific strategic training practices. Other components within DHS may, or may not, be engaged in similar practices. To determine whether DHS followed leading management practices in planning and implementing departmentwide training, we also drew on our previous work on strategic planning and effective management practices. We did not include within our scope training intended for audiences external to DHS, and we generally covered training and training management in effect during the period in which we did our work. We conducted our work from November 2004 through July 2005 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from DHS, which are reprinted in appendix III. The comments are addressed in the Agency Comments section of this report. One of the initial goals for creating DHS was to better protect the United States from terrorists entering the country, and ports of entry are the means through which terrorists can enter. The creation of CBP within DHS merged border inspection functions at U.S. ports of entry, which had previously been performed by three separate agencies. Known as “One Face at the Border,” this initiative created the positions of CBP officer and CBP agriculture specialist that combined aspects of three former inspector functions. This initiative aimed to improve coordination and communication of inspections to better protect the nation’s borders from terrorists as well as to improve entry for legitimate travel and trade. To successfully make the transition to these new positions, significant training was needed. 
Specifically, CBP created a series of training courses to provide former U.S. Customs and former Immigration and Naturalization Service officers with the knowledge and skills necessary to carry out the responsibilities of this new position. In addition, CBP officers received training to meet CBP’s new mission priority of terrorism prevention. Although the emphasis was on cross-training legacy officers, the new curriculum was also geared to new hires. Because agricultural inspections are more specialized, CBP officers receive training sufficient to enable them to identify potential agricultural threats, make initial regulatory decisions, and determine when to make referrals to CBP agriculture specialists. More detailed agricultural inspections are performed by these specialists who have substantial training and background in agricultural issues. A variety of training delivery methods were used (e.g., e-learning and classroom) and these training methods were reinforced with extensive on- the-job training. In addition to traditional content areas (e.g., cross-training for former U.S. Customs officers includes courses on immigration fundamentals and immigration law), training courses also covered CBP’s new priority mission of preventing terrorism (e.g., training in detecting possible terrorists and fraudulent documents, honing interviewing skills, and making appropriate referrals to staff for additional inspection). CBP emphasizes on-the-job training in an effort not to place inspectors on the job without direct supervisory and tutorial backup. Training for new recruits has also been modified to include a preacademy orientation program at the port location where the recruit will eventually work before he or she receives academy training. This is a 72-day course for CBP officers and a 46-day course for CBP agriculture specialists. CBP’s main strategy to prepare for field delivery of training was to provide extensive train-the-trainer courses so that trainers could return to their field sites and instruct officers there. Training priorities were established with the idea of spacing the training out so that field offices would not be overwhelmed. For example, CBP rolled out its primary cross-training to airports, while antiterrorism training was rolled out to land borders. Officials reported that cross-training benefited CBP officers since they have gained more knowledge by learning both immigration and customs laws and procedures. This increase in knowledge has the potential benefit of providing more variety in job tasks as well as increasing the opportunities for advancement since an officer can now apply for supervisory-level positions that had previously been open only to former U.S. Customs or Immigration and Naturalization Service officers. Change has not come about without challenges, however, as many officers were reported to have resisted changes to their responsibilities, mainly related to the difficulties in learning a new set of procedures and laws. Officials noted that there has been an enormous amount of required training for CBP officers, and it can sometimes be overwhelming. For former officers, in addition to completing an extensive cross-training schedule and new training related to terrorism prevention, there are many other required courses related to their mission. For example, training modules are required in areas such as body scanning, hazardous materials, cargo inspection, and seized assets. 
Although staffing challenges may ultimately be relieved with trained officers able to perform dual inspections, officials noted that it has been extremely difficult to take staff off-line to complete the “One Face at the Border” training. One official said that classes have been very difficult to schedule because of the constant pressure to staff operations. For example, in one case, a class was canceled right after it began because the trainees were pulled out to staff their inspection booths. This official also noted that trainers have had to be very flexible to accommodate staff schedules to ensure that training occurs.
Training can play a key role in helping the Department of Homeland Security (DHS) successfully address the challenge of transformation and cultural change and help ensure that its workforce possesses the knowledge and skills needed to effectively respond to current and future threats. This report discusses (1) how DHS is addressing or planning to address departmentwide training and the related challenges it is encountering; (2) examples of how DHS training practices, specifically those related to planning and evaluation, reflect strategic practices; and (3) examples of how DHS uses training to foster transformation and cultural change. DHS has taken several positive steps toward establishing an effective departmentwide approach to training, yet significant challenges remain. DHS has made progress in addressing departmentwide training issues, but efforts are still in the early stages and face several challenges. Actions taken by DHS include issuing its first training strategic plan in July 2005, establishing training councils and groups to increase communication across components, and directly providing training for specific departmentwide needs. However, several challenges may impede DHS from achieving its departmental training goals. First, the sharing of training information across components is made more difficult by the lack of common or compatible information management systems and a commonly understood training terminology. Second, authority and accountability relationships between the Office of the Chief Human Capital Officer and organizational components are not sufficiently clear. Third, DHS's planning may be insufficiently detailed to ensure effective and coordinated implementation of departmentwide training efforts. Finally, according to training officials, DHS lacks resources needed to implement its departmental training strategy. Examples of planning and evaluation of training demonstrate some elements of strategic practice. Specific training practices at both the component and departmental levels may provide useful models or insights to help others in DHS adopt a more strategic approach to training. We found that some components of DHS apply these practices, while others do not. For example, Customs and Border Protection (CBP) aligns training priorities with strategic goals through planning and budgeting processes. In the area of evaluation, the Federal Law Enforcement Training Center obtains feedback from both the trainee and the trainee's job supervisor to inform training program designers in order to make improvements to the program curriculum. Training has been used to help DHS's workforce as it undergoes transformation and cultural change. The creation of DHS from different legacy organizations, each with its own distinct culture, has resulted in significant cultural and transformation challenges for the department. At the departmental level, one of the ways DHS is addressing these challenges is by encouraging the transformation to a shared performance-based culture through the implementation of its new human capital management system, MAXHR. DHS considers training to be critical to effectively implementing this initiative and defining its culture. Toward that end, the department is providing a wide range of training, including programs targeted to executives, managers, and supervisors.
For example, at the component level, CBP has developed cross-training to equip employees with the knowledge needed to integrate inspection functions once carried out by three different types of inspectors at three separate agencies.
529 plans are college savings vehicles that originated in the states. In 1986, Michigan created the Michigan Education Trust to operate what is generally considered the first state prepaid tuition plan. In 1996, Congress enacted Section 529 of the Internal Revenue Code, setting out requirements that state 529 plans must meet to be exempt from federal tax. The Economic Growth and Tax Relief Reconciliation Act of 2001 included a provision making earnings included in distributions from 529 plan accounts entirely tax-exempt as long as they are used to pay for qualified higher education expenses. For other key legislative actions and the value of assets invested in these plans, see fig. 1. The number of 529 plan accounts has also increased since the plans were granted expanded federal tax advantages in 2001 (see fig. 2). 529 plans are state-sponsored investment or savings vehicles whose purpose is to encourage people to save for college. Contributions to 529 plans are made with after-tax dollars and are not deductible for federal tax purposes. Annual contributions in excess of $13,000 are generally subject to federal gift taxes. Total contributions may not exceed the amount necessary to provide for the qualified education expenses of the beneficiary, which is determined by each state; however, individuals may open 529 plan accounts in multiple states. Earnings on contributions grow tax-deferred. When a distribution is made from a 529 plan, the earnings portion is tax-exempt as long as it is used to pay for qualified education expenses. Taxpayers must report to the Internal Revenue Service (IRS) whether the distribution was for qualified higher education expenses. Distributions not used for qualified higher education expenses can be made to either the account owner or beneficiary, but the portion of the nonqualified distribution consisting of investment earnings is taxable and subject to an additional 10 percent penalty. The federal penalty does not apply in some circumstances, for example if the distribution was considered nonqualified because the beneficiary died or received a scholarship. While section 529 provides that account owners and beneficiaries may not directly or indirectly control how contributions or earnings are invested, in 2001, IRS issued a notice setting out a rule permitting a change in investment strategy once per year and upon a change in the designated beneficiary of the account. For 2009 only, this was increased to twice per year. There are few federal restrictions on 529 plan participation. For example, there are no income limits and almost anyone can initially be named as a beneficiary—an individual may open a 529 plan account for a child, grandchild, friend, spouse, or for themselves. Further, the 529 plan account owner may change the beneficiary at any time, though the subsequent beneficiary must be a member of the family of the original beneficiary in order for this change to be tax-exempt. Because 529 plans are state-sponsored investments, states determine whether and what type of plans to offer (i.e., prepaid tuition or savings) as well as the eligibility criteria (for example, at the time of application prepaid tuition plans may require either the account owner or beneficiary to be a resident of the state administering the plan whereas residents and nonresidents can invest in most states' college savings plans); administrative and investment fees; and associated state tax benefits. Almost all states offer a college savings plan.
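The federal tax treatment described above lends itself to a short worked example. The sketch below is a minimal illustration, not tax guidance: the account values, the $6,000 withdrawal, and the 25 percent marginal income tax rate are hypothetical assumptions, and the pro rata split between contributions and earnings is a simplification of the actual rules.

# Illustrative sketch of the federal tax treatment of a 529 plan
# distribution described above. All figures are hypothetical; actual
# liability depends on the taxpayer's circumstances.

def distribution_tax(basis, balance, amount, qualified, marginal_rate=0.25):
    """Estimate federal income tax and penalty on a 529 distribution.

    basis: total after-tax contributions in the account
    balance: account value at the time of the distribution
    amount: amount withdrawn
    qualified: True if used for qualified higher education expenses
    marginal_rate: assumed marginal income tax rate (illustrative)
    """
    # Assume the distribution carries earnings and contributions pro rata.
    earnings_share = 1 - (basis / balance)
    earnings = amount * earnings_share
    if qualified:
        return 0.0, 0.0  # earnings portion is tax-exempt when qualified
    income_tax = earnings * marginal_rate
    penalty = earnings * 0.10  # additional 10 percent federal penalty
    return income_tax, penalty

# Example: $20,000 contributed, account now worth $30,000,
# $6,000 withdrawn for a non-education purpose.
tax, penalty = distribution_tax(20_000, 30_000, 6_000, qualified=False)
print(f"tax on earnings portion: ${tax:,.0f}, penalty: ${penalty:,.0f}")
# The earnings portion here is $2,000, so the sketch prints $500 and $200.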
In college savings plans, individuals purchase interests or shares in a trust established by the state. In most cases, the trust assets are invested in mutual funds. The shares in college savings plans can be sold directly by the state or through an external program manager hired by the state (direct-sold) as well as through a financial advisor or broker (advisor-sold). College savings plans may offer a number of investment options, which often include stock mutual funds, bond mutual funds, and money market funds. These investment options can vary in terms of risk and return, ranging, for example, from investments that are insured by the Federal Deposit Insurance Corporation (FDIC) to options that are almost completely invested in aggressive-growth funds. Many plans offer age-based portfolios that shift automatically into more conservative investments as the beneficiary approaches college age. Fifteen states also offer prepaid tuition plans to their state residents. To help run their plans, states may employ marketing staff, advisors, financial consultants, or other experts. Some states offer a variety of tax advantages that can include a state deduction or non-refundable credit for plan contributions and tax-deferred earnings. These benefits may apply only to residents who make contributions to their own state's plan or, in a few states, may include contributions made to other states' plans. Although most 529 college savings plans have been modeled after mutual funds, 529 plans are regulated differently than mutual funds under the federal securities laws because they are regulated as municipal securities. As municipal securities, 529 plans are exempt from the registration and reporting requirements of the federal securities laws. However, broker-dealers selling 529 plans (advisor-sold plans) must comply with the rules of the Municipal Securities Rulemaking Board (MSRB). Specifically, MSRB requires broker-dealers who sell 529 plans to follow certain guidelines, such as having reasonable grounds to believe that the recommended product is suitable for the customer; disclosing certain information, such as plan fees and state tax implications; following certain requirements when advertising; and posting disclosure documents on the MSRB's Electronic Municipal Market Access website. However, MSRB rules do not apply to state issuers when they market their 529 plans directly to the investor without the assistance of a broker-dealer (direct-sold plans). In 2004, in response to concerns that 529 plan disclosures were inadequate, the College Savings Plans Network (CSPN), after working with the Securities and Exchange Commission, the MSRB, and the National Association of Securities Dealers, developed voluntary disclosure principles to be adopted by state issuers on plan performance, fees, and state tax information, among other things. These principles were designed to enhance investors' ability to compare information across plans. Since 2004, the principles have been updated several times, most recently in May 2011. As authorized under Title IV of the Higher Education Act of 1965, as amended, the Department of Education provides assistance to help millions of students and families meet the costs of higher education through grants, work-study, and loans.
A substantial portion of this federal financial aid is awarded based on the amount of a student's financial need, which is generally the difference between a student's cost of attendance and an estimate of the family's ability to pay these costs, known as the expected family contribution (EFC). In addition to the student's income and assets, parents' income and assets are also used to determine the student's EFC unless the student is classified as independent. Independent students have their own income and assets, along with those of their spouses, if applicable, included in the EFC. Several criteria are used to determine whether a student is independent, such as the student's age and whether he or she is married or separated, enrolled in a master's or doctoral degree program, or serving on active duty in the military, among other things. To apply for federal financial aid, students and, in the case of dependent students, parents submit information on income, assets, and the number of children enrolled in college through the Free Application for Federal Student Aid (FAFSA). This information is then used to determine the student's eligibility for federal student aid by calculating the EFC through a process known as federal methodology, which is set out in statute. In terms of assets, figure 3 shows the information required by the FAFSA regarding the net worth of students' and parents' investments, which includes savings in 529 plans along with other investments such as Coverdells, money market funds, stocks, and mutual funds (for a full copy of the FAFSA see app. II). States and institutions may also offer financial aid. To determine the amount of such aid, some states and institutions choose to gather information in addition to what is required by the FAFSA. One form used by some institutions is the College Board's PROFILE form. The PROFILE asks for information not included on the FAFSA, such as home equity and medical expenses, as well as more detail about information that is included on the FAFSA. The institutions may then use an individualized institutional methodology to determine the student's EFC for institutional financial aid. According to the 2010 Survey of Consumer Finances (SCF), less than 3 percent of U.S. families had 529 plans or Coverdells, a similar but less commonly used type of education savings account. Even among families who acknowledged upcoming education expenses, 529 plans were not widely used. Of the approximately 25 percent of families who said they expected major education expenses in 5-10 years, about 7 percent had 529 plans or Coverdells. Similarly, of the approximately 18 percent of families who reported that saving for education was a priority, only about 9 percent had 529 plans or Coverdells. 529 plans are also less commonly used than other savings vehicles among those saving for college. For example, a 2010 Sallie Mae survey found that most parents saved for college in general savings accounts or certificates of deposit and, of those who did invest, more used general investment vehicles than 529 plans. Based on our analysis of SCF data, the median amount in 529 plan or Coverdell accounts was about $14,700. Families with 529 plans or Coverdells typically had much more wealth than families without these accounts, according to our analysis of SCF data.
Based on our analysis of the 2010 SCF, we estimate that the median financial asset value for families with 529 plans or Coverdells was about $413,500, which is about 25 times the median financial asset value for families without 529 plans or Coverdells (about $15,400). For example, families with 529 plans or Coverdells had more retirement assets than other families. Of families with 529 plans or Coverdells, about 94 percent had retirement assets, such as those in 401(k) accounts or traditional pensions. In contrast, approximately 49 percent of families without 529 plans or Coverdells had these retirement assets. Further, the median value of retirement assets was much greater for those with 529 plans or Coverdells. Specifically, the median value in retirement accounts was about $213,600 for families with 529 plans or Coverdells, while the median value for families without 529 plans or Coverdells was about $40,300. A larger share of families with 529 plans or Coverdells (27 percent) than of families without them (16 percent) also believed they will have more than enough retirement income from pensions and Social Security to maintain current living standards, which may put them in a better position to save for college. In 2012, we reported similar findings using data from the 2007 SCF (GAO-12-560). Officials in every state and most experts and representatives we interviewed identified tax benefits, fees, and investment options as some of the most important features consumers consider when choosing whether or not to participate in a 529 plan and, if so, which plan to choose. These features vary by state and plan. All states offer at least one plan and many offer a combination of college savings (either direct-sold, advisor-sold, or both) and prepaid plans. For example, 14 states offer a direct-sold plan only, 22 states offer both direct-sold and advisor-sold plans, and 6 states offer all three plan types: direct-sold, advisor-sold, and prepaid. The popularity of direct-sold college savings plans has grown over time and in 2011 total assets were essentially evenly split between those and advisor-sold plans. States offer a range of tax benefits for 529 plans, and these benefits are a primary incentive to investing in a 529 plan, according to many state officials we interviewed. In addition to earnings growing tax-deferred, our analysis of CSPN data shows the majority of states with an income tax offer some form of benefits: 33 offer a tax deduction and 3 offer a nonrefundable tax credit to residents who participate in their state's plan. Five states also extend benefits to residents who participate in any state's plan. Almost all states limit tax benefits to the account owners, but one state extends those benefits to grandparents, aunts, and uncles who contribute to the plan. Officials in some states we interviewed said they provide additional tax benefits; for example, one state offers an exemption from the state inheritance tax. Others allow contribution amounts that exceed the annual deduction limit to be carried over to the following year's return. Various fees and expenses may be associated with 529 plans, including administrative and investment fees. Administrative fees, which are charged by the state and/or the program manager hired by the state, cover administration of the 529 program, including customer service and marketing. Investment fees are charged by the investment company to manage the funds.
The aggregate of these administrative and investment fees is often referred to as “annual asset-based fees,” which are expressed as a percentage of the fund’s average net assets. In addition, advisor-sold plans may also charge a “sales load”—that is, a fee paid to the selling broker when the fund is purchased or redeemed—and direct- sold and advisor-sold plans may also charge participants additional fees for services such as enrolling or changing the account owner. Fees among 529 plans vary widely; total annual asset-based fees among plans nationwide ranged from 0 percent to 1.97 percent for direct-sold plans and 0 percent to 2.78 percent for advisor-sold plans, as of July 2012. As seen in table 2, there is variation among states in both administrative and investment fees. Such variation occurred even among states with similar administrative structures. For example, among three of the states we reviewed where most administrative functions were conducted in-house, one state charged administrative fees of between 0.44 percent and 0.46 percent of the balance annually, another charged between 0.15 percent and 0.20 percent, and a third charged no administrative fees, instead covering operational costs and salaries through an annual state appropriation. Investment fees also varied: for example, underlying mutual fund fees ranged from 0 percent to 1.82 percent of the balance annually, depending on the type of investment option a participant chooses. For advisor-sold funds, sales loads also varied, ranging from 0 percent to 5.75 percent, in part based on the fund class. In addition, among the five states we reviewed, four did not charge an enrollment or application fee, while one charged $25, although the fee may be waived through promotions to encourage participation. 529 plan fees remain higher than fees for similar mutual funds an investor might purchase outside of a 529 plan. According to a 2011 study by Morningstar, 529 plan mutual funds charged, on average, an additional 0.31 percent of the account balance annually in investment fees compared with their respective mutual fund categories in the open market. The administrative fees charged by most 529 funds raise the cost even higher. However, Morningstar does note that fees for 529 plans have declined in recent years and officials at the majority of state plans we interviewed told us they have taken steps to reduce fees – for example, by renegotiating program manager contracts, using competitive bidding for program management, or consolidating functions in-house rather than using a program manager. As we have previously reported, fees are one of many factors participants should consider when investing because even a small fee increase can significantly decrease savings over time. State plans offer a variety of investment options to 529 college savings plan participants. Plans in the states we reviewed, for example, include up to 17 different investment options, including age-based, static, and customized portfolios, to cater to participants’ various levels of risk tolerance and investment sophistication. Age-based options were generally the most popular and, according to state officials we interviewed, may appeal to investors who might have more limited investment experience or a lower risk tolerance. One state plan we reviewed also offers a customized option for participants who seek more control over their investments, which allows them to designate their own allocations in funds such as stocks and bonds. 
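The earlier point that even a small fee increase can significantly decrease savings over time can be illustrated with a simple projection. The sketch below is a minimal example under stated assumptions: the 18-year horizon, $2,400 annual contribution, 6 percent gross return, and the two fee levels are illustrative choices, not figures drawn from the plans discussed in this report, and fees are modeled simply as a reduction in the annual return.

# Illustrative compounding of annual asset-based fees on a 529 account.
# The contribution schedule, gross return, and fee levels are assumptions
# for demonstration only.

def balance_after(years, annual_contribution, gross_return, asset_fee):
    """Project an account balance with contributions made at year-end."""
    balance = 0.0
    net_return = gross_return - asset_fee  # fees reduce the effective return
    for _ in range(years):
        balance = balance * (1 + net_return) + annual_contribution
    return balance

low_fee = balance_after(18, 2_400, 0.06, 0.0025)   # 0.25% total annual fees
high_fee = balance_after(18, 2_400, 0.06, 0.0125)  # 1.25% total annual fees
print(f"0.25% fees: ${low_fee:,.0f}   1.25% fees: ${high_fee:,.0f}   "
      f"difference: ${low_fee - high_fee:,.0f}")

Running this sketch shows a gap of several thousand dollars between the two fee levels by the time the beneficiary reaches college age, which is the compounding effect the preceding discussion describes.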
For more risk-averse participants, some states also offer an FDIC-insured investment option or one that in some other way guarantees the investment's principal. To help investors determine which plan best meets their needs, officials we interviewed in two states said their states provide risk assessment information through customer call centers. One state developed a risk tolerance questionnaire to explain investment scenarios, while the other had a representative ask informal questions to help potential investors assess their own risk level. Families can also choose to invest in prepaid plans, which were offered in three of the five states we reviewed. These plans also vary in fees, payment options, and cost. Two plans, for example, charged an annual administrative fee of just under 0.50 percent, and the third charged no annual fee. In terms of payment options and costs, two states we interviewed offered prepaid plans by academic periods or units that can be used to pay for future tuition costs with the option of paying in a lump sum or through a monthly payment program. According to state officials, the cost of these prepaid plans is generally determined by forecasting future tuition and fees at different types of schools (4-year, community college, etc.), given a number of actuarial assumptions on tuition inflation and anticipated investment return. One state, for example, offered a contract to cover four years of college costs for a child currently under age five at a lump sum of $56,600, and another state offered a similar contract for just under $66,500. A third state we reviewed does not offer units or contracts, but allows participants to contribute any amount to the plan. When participants withdraw the funds for qualified educational expenses, they receive the amount they contributed, adjusted by a tuition inflation value. Families encounter a number of barriers as they consider saving for college: they may struggle to make saving a priority, and many of those who do plan to save do not know that 529 plans exist as a savings option. Additionally, once families decide to invest in a 529 plan, they may have trouble understanding how it works, and the variation across plans may affect their ability to select one that best meets their needs (see fig. 5). Families may encounter a variety of barriers saving for college, such as insufficient income, underestimating the cost of college, and misconceptions about financial aid availability, but selected states are taking steps to help address these barriers. A 2010 national survey published by Sallie Mae found that while nearly nine out of ten parents expected their child would attend some form of higher education, only three out of five parents of college-bound children have saved or invested for their oldest child's education. First, many families may not save because they lack adequate income or have competing financial priorities. The same Sallie Mae survey reported that 68 percent of those who are not saving cited a lack of money as a major reason. Officials in the states we reviewed also identified this as a challenge: for example, one state official said that the economic downturn has affected some families, who are reluctant to make deposits or participate in a 529 plan because they may need to choose between paying their mortgage and saving for college. In terms of competing priorities, two industry representatives we interviewed stressed that retirement should be a higher priority than saving for college.
Officials from one state added that when a family's budget shrinks or the economy is uncertain, families reduce college savings rather than retirement savings. Furthermore, a few industry representatives said families should consider using other tax-deferred savings vehicles where funds could be used for multiple purposes, such as retirement and education. The states we selected to review have adopted strategies to expand participation among lower-income families who may have limited resources to allocate towards savings, including offering matching programs, low minimum initial contributions, and less risky investment options. Matching Programs: One state we reviewed, for example, matches contributions dollar for dollar, up to $400 annually per beneficiary for up to 4 years; in 2011, a family of 4 earning less than $44,700 would have qualified for the match, according to plan documentation. In addition to increasing participation, officials from one state plan noted that the matching program can also help minimize student loans and reduce the amount students will have to work while in school. An ongoing experiment conducted by the Center for Social Development also found a positive impact on the number of 529 plan accounts for families who were automatically enrolled in a state-owned 529 account with a matching program in one state. In addition to participating in the automatically opened account, families in the treatment group were offered an additional $100 to open a private account. These families opened private 529 accounts at a higher rate (17 percent of families with a match compared to 2 percent of those in the control group without the incentives) and deposited more into those accounts. While matching programs may have positive results, two states we reviewed reported challenges with funding and awareness. One state's program had not been authorized since 2008, and officials in another state said that, although their matching program is open to all participants, enrollment remained low because the state's 529 marketing budget was eliminated. Low or No Minimum Initial Contributions: Low or no minimum initial contributions and fee waivers may also help increase participation among low-income families, according to state officials and others we interviewed. Nationally, minimum initial contributions range from $10 to $5,000, according to our analysis of CSPN data; however, the majority of states require an initial contribution of $25 or less. Two states allow participants to open an account with any amount. Officials in one state reported that keeping the initial deposit amounts low can also help facilitate one of their main goals: to help spur the mental commitment and habit of saving. Less Risky Investment Options: Officials from many states we reviewed said they offer investment options that pose less risk to the investor, which can appeal to low- to moderate-income families. One state, for example, partnered with two local banks to provide an FDIC-insured option to target families who might otherwise save in the bank's savings account. According to an official from the plan's banking partner, clients with more assets often use financial planners and are aware of 529 plans, while the FDIC-insured option was designed for those without financial planners and who use the bank's more traditional products. However, according to state officials, most of the states we reviewed are not tracking participants' demographic information, such as income, making the success of these efforts for low-income families difficult to assess.
Second, in addition to insufficient income, some families may not save because they procrastinate or underestimate the true cost of college, according to officials from most of the states we reviewed. Officials at one state 529 plan said that some parents may not budget money to save for college because they do not understand what college really costs, or they become overwhelmed and do nothing. To address these challenges, selected state 529 plans have adopted financial literacy programs and marketing strategies emphasizing the importance of saving even a small amount early and often. To target families with younger children, two states provide materials to parents of newborns through the hospital or direct mail, and two states work with elementary schools to distribute materials on the states' 529 plans. Some states also establish contribution deadlines linked to certain benefits, such as discounted enrollment, or provide incentives to families who contribute during certain times of the year. To prevent families from feeling overwhelmed about college costs, one state has focused its marketing on saving a small amount each month, $25, to help reduce the student's future debt, instead of focusing on the total cost of college. Third, some families may not save because of misconceptions about the availability of financial aid, even though such aid may be limited given the impact of a difficult economic climate, constraints on endowments, and tighter budgets. In response, many of the states we reviewed have attempted to address misconceptions about the financial aid process. For example, one state lists common myths about 529 plans on its website, explaining that approximately 60 percent of federal financial aid comes in the form of loans, a debt the family must repay. The site encourages families to save even in small amounts to offset the amount of debt the family will incur. For those who are saving for college, awareness that 529 plans exist as a savings option is a challenge to participation, according to officials in most of the states we reviewed. In addition, among parents who are saving for college, one study found that almost half are unfamiliar with 529 plans. An additional 4 percent volunteered that they had never heard of the plan or did not know what it was. Families often learn about 529 plans from financial planners, according to many state officials we interviewed; therefore, awareness may be a particular challenge for low-income families, who generally do not have access to such resources. Further, some state officials and industry representatives we interviewed encountered families with misperceptions about how 529 plans work, such as not understanding that they can invest in plans outside their home state or use savings at any college or university. For example, officials from two states reported that families mistakenly believe prepaid plans can only be used at an in-state institution. The complexity of 529 plans can also make marketing and communication difficult, according to officials we interviewed in two states. Marketing officials from one state told us consumers requested more information on 529 plans, but it was difficult to communicate information about the complex plans in a simple, consumer-friendly way. Officials in another state noted that plan complexity and a lack of clear information can discourage families from researching and enrolling in a plan. Many state officials and some academic experts and industry representatives reported that simplifying the information available to consumers might keep families from feeling overwhelmed.
Because MSRB rules do not apply to state issuers when they market their direct-sold 529 plans, CSPN developed a set of disclosure principles to help states provide consistent information. The voluntary principles contain recommendations to help consumers understand plans and compare various features, such as fees, tax issues, and risk. While disclosures have been helpful, according to one expert who consults with a number of states, there is room for improvement: disclosures could be more rigorous in ensuring that consumers are informed of less-costly options within a state if they exist and should cover information on prepaid plans, which is currently not standardized. When comparing direct-sold disclosure documents across the states we reviewed, we found that the five states’ documents generally adhered to the CSPN disclosure principles and contained consistent information a consumer could compare. Three states’ documents, however, were missing some information that could be helpful to consumers, such as information on the risk of state tax law changes and a statement that 529 plans should only be used to save for qualified higher education expenses. The structure of federal and state tax benefits, a primary incentive for some 529 plan investors, can also affect participation because these benefits may not be as helpful to low-income families, according to some academic experts, industry representatives, and state officials we interviewed. Low-income families with low or no tax liability see less benefit from federal tax benefits and may see no benefit from nonrefundable state tax credits provided to 529 plan investors. According to a 2009 Treasury report, families saving in 529 plans may need to carefully consider whether their child will go to college because the penalty incurred if the funds are not used for qualified education expenses may outweigh the tax benefits for low-income families. In addition, Treasury noted that most states do not extend tax benefits to residents investing in out-of-state plans, limiting competition. As a result, families have a strong incentive to choose their home state plan even if other plans offer preferable investment choices. In 2009, Treasury recommended states eliminate this “home-state bias” to provide more investment options to consumers, more intense competition between plans, and potentially lower fees. According to an annual report from one state that extends its tax benefits to residents who invest in other states’ plans, doing so, when other states do not, puts the home state plan at a competitive disadvantage. State officials explained that this policy results in other plans marketing their products in the state. Residents, therefore, may be unaware of their home state plan’s benefits, according to the report. Finally, the fact that account holders may change their investment option only once per year may affect participation in 529 plans, according to state officials and some industry representatives we interviewed. Officials from one financial services company advocated removing any limits on changing investment choices beyond those imposed by the financial services company sponsoring the fund, as is the case with 401(k) plans and individual retirement accounts. Another industry representative observed some 529 plan participants changing their account beneficiary solely because it would allow them to change their investment options.
However, the representative cautioned that participants should not change their investment options too frequently; many experts advocate that investors are best served by sticking with a long-term investment plan. The extent to which savings in 529 plans, or other investments, affect how much a family is expected to contribute to the cost of college—the federal expected family contribution (EFC)—generally depends on the family’s amount of assets. Education incorporates the amount of specific types of assets into various calculations to determine the EFC. However, in two calculations, families who meet certain criteria are either not expected to contribute to the cost of college (automatic zero EFC) or they qualify for a simplified calculation. In both cases, assets, including savings in 529 plans, are not included in the calculation of the EFC. According to the 2007-2008 NPSAS, about a quarter of families who filed FAFSAs met these criteria. In other calculations, assets, including savings in 529 plans, may affect the EFC to different extents depending on whether students are dependent on their parents or are independent with dependents of their own. For dependent students, between 2.64 percent and 5.64 percent of parental assets may be included in the EFC, as described below:

First, the parents report the net worth (current value minus debt) of their investments (see fig. 6, #1), but before the total contribution from assets is calculated, an amount known as the “education savings and asset protection allowance” is subtracted (see fig. 6, #2). This allowance is designed to help protect a portion of the parents’ assets.

Second, 12 percent of any parental asset amount that exceeds the education savings and asset protection allowance is used to determine the contribution from assets that will be considered in the final EFC calculation (see fig. 6, #3).

Third, this contribution from assets is added to the parents’ available income to determine their adjusted available income (see fig. 6, #4).

Fourth, a marginal rate, from 22 percent up to a maximum of 47 percent, is applied to the sum of the parents’ available income and contributions from assets (known collectively as the adjusted available income) to determine their EFC (see fig. 6, #5).

As a result, the amount of net parental assets, including savings in 529 plans, that can be included in the EFC ranges from 2.64 percent to 5.64 percent (a simplified numerical illustration of this calculation follows this discussion). Most state financial aid offices also consider savings in 529 plans as assets. According to the 2009-2010 National Association of State Student Grant and Aid Programs survey, 35 states reported that they used the federal methodology for determining the EFC for state aid. However, some states that reported using federal methodology for their primary student needs analysis also indicated they provide special treatment for state 529 college savings or prepaid plans when determining student eligibility for aid. Specifically, seven states that used federal methodology to award their state aid excluded the state’s 529 college savings plan and three excluded the state’s prepaid plan from their calculation for state aid. Of the officials in the six state financial aid offices we interviewed, none said they considered assets to a greater extent than the FAFSA, and a few said their state took specific steps to exempt savings in these plans from consideration.
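To show how the percentages described above combine, the following is a minimal illustrative sketch rather than Education’s actual formula or software. Only the 12 percent asset conversion and the 22 to 47 percent marginal range come from the discussion above; the dollar amounts, the $20,000 allowance, and the use of a single flat rate are hypothetical assumptions, whereas the real federal methodology draws allowances and a bracketed assessment schedule from Education’s published tables.

```python
# Hypothetical sketch of the federal EFC asset treatment described above.
# All dollar values and the flat-rate simplification are invented for illustration.

def contribution_from_assets(net_parental_assets, asset_protection_allowance):
    """Steps 1-2: subtract the education savings and asset protection
    allowance, then convert 12 percent of any excess into a contribution."""
    excess = max(net_parental_assets - asset_protection_allowance, 0)
    return 0.12 * excess

def efc_portion(available_income, asset_contribution, marginal_rate):
    """Steps 3-4: add the asset contribution to available income and apply
    a marginal rate (the actual formula uses brackets from 22 to 47 percent)."""
    adjusted_available_income = available_income + asset_contribution
    return marginal_rate * adjusted_available_income

# Hypothetical family: $50,000 in reported investments, a $20,000 allowance,
# and the top 47 percent marginal rate. Available income is set to zero here
# to isolate the effect of assets.
asset_part = contribution_from_assets(50_000, 20_000)   # 0.12 * 30,000 = 3,600
efc_at_top_rate = efc_portion(0, asset_part, 0.47)      # 1,692
# 1,692 is 5.64 percent of the $30,000 in assets above the allowance;
# at the 22 percent rate the same assets would add 792, or 2.64 percent.
print(asset_part, efc_at_top_rate)
```

Under these assumptions, each additional dollar of assets above the allowance adds between 2.64 and 5.64 cents to the EFC, consistent with the range described in the text.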
Among the states that took such steps, officials in two states said there is language in their 529 plan authorizing legislation that exempts plan savings when determining a student’s eligibility for state financial aid. Officials in another state said their state issued a regulation stating that savings in a 529 plan would not affect state grant eligibility for residents attending nonprofit higher education institutions. An official in a fourth state said the legislature changed its higher education authorization language so that students would still be eligible to receive a state scholarship even if they enrolled in the state’s prepaid plan. Institutional financial aid practices vary with regard to assets, but institutions with more aid to award may gather additional information about a family’s financial status, according to some representatives of national financial aid organizations and institutional officials we interviewed. For example, some schools require students to provide information in addition to the FAFSA, such as filling out the College Board’s PROFILE form or submitting tax returns. One official said the PROFILE provides more detailed information on a family’s assets, such as home equity and retirement account balances, which helps the university prioritize the students with the most need. Institutional officials we interviewed said their schools considered savings in 529 plans as assets, even if they used different methodologies to calculate their financial aid or included the assets at different percentages. Officials at two institutions said they did not consider savings in 529 plans beyond how they are already reported by the family on the FAFSA. An official at a third institution said the school does not collect any additional information on savings in 529 plans beyond what is requested on the FAFSA even though the school requires families to fill out the PROFILE form and uses an institutional methodology to award its financial aid. The remaining institutional officials said they collect additional family financial information when calculating student aid, but consider savings in 529 plans similarly to the family’s other assets. Specifically, one institutional official said her school uses the PROFILE form to gather more detailed information about a family’s financial situation. Even so, she said, 529 plan savings do not affect a student’s need any differently than other assets, which the institution assesses at about 5 percent of their value. Additionally, she said 529 plan assets are considered parental assets even if they are reported as student assets because the school assesses parental assets at a lower percentage. An official at another institution said her school assesses assets at around 20 percent of their value when calculating the EFC for institutional aid. Officials’ opinions varied on whether savings in 529 plans should affect financial aid, but many said families’ concerns that these savings will have an adverse effect are common. One state financial aid official said it would be helpful if 529 plan savings were excluded entirely from the calculation because including them can be a deterrent to saving. She said her office often encounters families who feel penalized for saving because they believe the students without savings receive financial aid. Likewise, a 529 plan official said regardless of whether the student’s financial aid will be reduced by savings in a 529 plan, there is the perception that it will.
In contrast, one institutional financial aid official said savings in 529 plans should not be treated any differently than other assets because the need analysis is meant to determine the family’s fair share of college expenses and excluding 529 plans would be counter to this aim. One researcher we interviewed found that the issue may be most important for those families who are on the margin of receiving federal financial aid. Regardless of the perceived effect 529 plan savings may have on financial aid, some of the officials we interviewed said they encourage families to save for college because much of the aid they may be offered could be in the form of loans, so saving will generally be in the student’s long-term financial interest. As currently designed, 529 college savings plans benefit a small percentage of U.S. families. In general these families tend to be wealthier than others. It is not clear whether the $1.6 billion in federal tax expenditures that these plans represent strategically targets limited federal resources. Although 529 plans do help some families save for college, families with less income and who are uncertain about whether their children will attend college may have less incentive to invest resources in 529 plans than in other forms of savings. In addition, the tax benefits attractive to a higher-income family do not offer as much benefit to a family with lower tax liability. Questions about who benefits from this tax expenditure occur in an environment of long-term fiscal challenges and difficult choices about how the federal government allocates limited resources. Reviewing 529 plans in conjunction with the other billions of dollars in federal educational assistance provided through tax expenditures, credits, and deductions could help Congress determine whether this program is meeting its goals. Similar to GAO’s prior work on higher-education related tax expenditures, our analysis of 529 college savings plans was not able to address all questions that could inform future policy choices regarding 529 plans. For example, what is the purpose of the federal tax benefits provided through 529 plans? Are the goals and objectives clearly defined and measurable? Who is the target population for 529 plans and does the current structure provide appropriate incentives for that population? How do the 529 plan federal tax benefits interact with other programs, such as federal financial aid and other higher education tax benefits and savings vehicles? Consideration of these questions could facilitate continued congressional oversight of this tax expenditure. We provided a draft of this report to Education, Treasury, and IRS for comment. The agencies provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the Secretary of Education, Secretary of the Treasury, Commissioner of Internal Revenue, relevant congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at [email protected] or 202-512-6806. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Our review examined (1) the percentage and characteristics of families enrolling in 529 plans, (2) the plan features and other factors that affect participation in 529 plans, and (3) the extent to which savings in 529 plans affect financial aid awards. To answer these research objectives, we analyzed government data; interviewed state 529 plan officials from select states as well as industry representatives and academic experts; reviewed plan documents and analyzed industry data; conducted a literature review; interviewed federal, state, and institutional financial aid officials; and reviewed Department of Education (Education) and Internal Revenue Service (IRS) documents as well as relevant federal laws, regulations, and guidance. We assessed the reliability of the data we used by reviewing documentation, interviewing knowledgeable officials, and conducting electronic testing on relevant data fields. We found the data we reviewed reliable for the purposes of our analyses. We conducted this performance audit from November 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine the percentage and characteristics of families enrolling in 529 plans, we reviewed data from the 2010 Survey of Consumer Finances (SCF); the 2007-2008 National Postsecondary Student Aid Study (NPSAS); and 2007-2010 Statistics of Income (SOI) federal tax data. The 2010 SCF, 2007-2008 NPSAS, and 2010 SOI were the most recent data available at the time of our engagement, so to ensure consistency in reporting we adjusted all dollar amounts from previous years’ data to 2010 dollars. Each of these three data sources (SCF, NPSAS, and SOI) is based on a probability sample, and estimates are formed using the appropriate estimation weights provided with each survey’s data. Because each of these samples follows a probability procedure based on random selections, each represents only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 2.5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates based on the SCF, NPSAS, and SOI have 95 percent confidence intervals that are within 5 percentage points of the estimate itself, and all numerical estimates other than percentages have 95 percent confidence intervals that are within 5 percent of the estimate itself. For our analysis of the percentage and characteristics of families who held 529 plans, we relied primarily on restricted data from the 2010 SCF. SCF is a triennial survey sponsored by the Board of Governors of the Federal Reserve System (Federal Reserve) to provide detailed information on the finances of U.S. households. The SCF sample of 6,492 households represented approximately 118 million households in 2010.
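As a simplified illustration of the 95 percent confidence intervals described above, the sketch below computes an interval for a weighted survey proportion using a normal approximation and an assumed design effect. It is not the procedure used for the estimates in this report: the data are synthetic, and actual SCF, NPSAS, and SOI estimates require each survey’s own weighting and variance-estimation methods (for example, the SCF’s replicate weights and multiple implicates).

```python
import math
import random

# Hypothetical illustration only: a 95 percent confidence interval for a
# weighted proportion from a probability sample, using a normal approximation.
def weighted_proportion_ci(weights, indicator, design_effect=1.0, z=1.96):
    """indicator[i] is 1 if sampled household i has the characteristic, else 0."""
    total_weight = sum(weights)
    p = sum(w * x for w, x in zip(weights, indicator)) / total_weight
    # Kish effective sample size, shrunk further by an assumed design effect.
    n_eff = (total_weight ** 2 / sum(w * w for w in weights)) / design_effect
    half_width = z * math.sqrt(p * (1 - p) / n_eff)
    return p, (p - half_width, p + half_width)

# Synthetic data: 400 households with unequal weights; roughly 3 percent
# are flagged as holding a 529 plan.
random.seed(0)
weights = [random.uniform(500, 2000) for _ in range(400)]
has_529 = [1 if random.random() < 0.03 else 0 for _ in range(400)]

estimate, (low, high) = weighted_proportion_ci(weights, has_529, design_effect=1.5)
print(f"estimate = {estimate:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```

With a design effect of 1 and equal weights, this reduces to the familiar normal-approximation interval for a simple random sample.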
The SCF collects detailed financial characteristics on an economically dominant single individual or couple (married or living as partners) in a household, which we refer to as a family for the purposes of this report. For our analysis, we aggregated financial information so that, unless otherwise noted, all SCF estimates are for the family rather than the individual survey respondent. We did not restrict our analysis to families with children, in part, because 529 plans can be used for nearly anyone, including one’s child, grandchild, niece, nephew, and oneself. However, about 88 percent of families with 529 plans or Coverdell Education Savings Accounts (Coverdells), a similar education savings vehicle, had children 25 years of age or younger living with them. Our estimates for 529 plans included Coverdells because Federal Reserve officials said respondents did not always distinguish between the two account types; therefore, we did not separate these responses because of data reliability concerns. However, the officials indicated that a larger share of the SCF respondents reported having 529 plans than Coverdells. Further, using SOI data, we estimate that in 2010 approximately 85 percent of tax filers who took a distribution from either a 529 plan or a Coverdell reported distributions from a 529 plan, while 14 percent reported distributions from a Coverdell and 1 percent reported distributions from both. We wrote an analysis program that the Federal Reserve ran using their restricted SCF dataset to separate information on Medical Savings Accounts and Health Savings Accounts, which had been included in the 2010 public dataset with 529 plans and Coverdells. Federal Reserve officials modified some resulting information to protect the privacy of survey respondents, for example by rounding dollar amounts. Using SCF, we generated estimates on the percentage and characteristics of families enrolled in 529 plans or Coverdells and of families not enrolled in these plans. We examined family characteristics such as wealth (financial assets), income, education, and race or ethnicity. To calculate financial assets, we used the methodology the Federal Reserve uses to produce variables for its published Bulletin articles. This methodology included assets held in checking, savings, and brokerage accounts, certificates of deposit, mutual funds, stocks, bonds, life insurance, retirement accounts, and other vehicles such as 529 plans. Assets held in retirement accounts included those in defined contribution plans (e.g., a 401(k), individual retirement account, or thrift savings plan) as well as in traditional pensions or defined benefit plans. To calculate income, we used the family’s self-reported total income. To report the family’s highest educational attainment, we reviewed the education of each respondent and his or her partner or spouse and included whichever was higher. We reported information on the respondent’s race or ethnicity, which does not necessarily indicate the race or ethnicity of other family members. We used 2007-2008 NPSAS data to develop a similar demographic profile for college students and generate other estimates on college costs and financial aid amounts. NPSAS is a comprehensive study by Education that examines how students and their families pay for higher education. It includes nationally representative samples of 113,535 undergraduates, 12,585 graduate students, and 1,581 first-professional students enrolled any time between July 1, 2007, and June 30, 2008.
The NPSAS data are based on administrative records and student interviews, and NPSAS includes survey results from both students who received financial aid and those who did not. While we used NPSAS to develop a demographic profile for college students similar to the one we developed for the general population using SCF, families with 529 plans are not directly comparable to families of college students. For example, while our estimates using SCF are for all families (including families with children in college, with children not in college, and with no children), our estimates for families using NPSAS are exclusively for families with a current college student. In NPSAS, families include the student and the student’s parents (if the student is dependent) or the student’s spouse and dependents (if the student is independent). Further, the population of NPSAS students’ families is not the same as the population of families with a child in college because a family may have more than one student in college in a given year. Consequently, students’ family characteristics derived from NPSAS are not directly comparable to family characteristics based on the SCF, though for the purposes of our report we use similar terminology to describe them. Similar to our analysis of SCF, we generated estimates of the characteristics of college students’ families—including income, education, and race or ethnicity. To report income, we calculated the total income of (1) the student’s parents (if the student was dependent) and (2) the student and the student’s spouse (if the student was independent). To report the family’s highest educational attainment, we reviewed the education of each student’s mother and father and included whichever was higher. We also reported information on the student’s race or ethnicity, which does not necessarily indicate the race or ethnicity of other family members. We also developed separate estimates for students who are considered either dependent on their parents or independent for financial aid purposes. We also used NPSAS to generate other estimates related to the cost of college and amount of financial aid awards. First, we estimated the median annual cost of attendance at 4-year public and private non-profit institutions. This included tuition and fees, room and board, transportation, and personal expenses, though the estimate is valid only for students who attended one institution. Second, we estimated the percentage of students who received grants and loans, as well as the median amount of these grants and loans and the percent and amount of college expenses remaining. Third, we generated estimates for the proportion of students who filled out the Free Application for Federal Student Aid (FAFSA) and, for those who did fill out the FAFSA, the proportion who met certain criteria to have assets excluded from the federal expected family contribution (EFC) and the proportion whose assets affected the EFC. Finally, we calculated the percentage of students who received state and institutional financial aid. We also analyzed 2007-2010 taxpayer data from SOI to determine the extent to which taxpayers used distributions from 529 plans for qualified education expenses and how the tax savings from these plans were distributed across income levels. The SOI individual tax return file is a stratified probability sample of income returns filed with the IRS. The SOI sample of 308,583 returns represented approximately 143 million tax returns filed for 2010.
We combined data from the SOI individual tax file with information from the Form 1099-Q. A 529 plan must file a Form 1099-Q with the IRS and the account owner or beneficiary each time a taxpayer receives a distribution from a 529 plan account. This form includes information on the amount of the distribution and the earnings (or loss) on the distribution. When taxpayers receive a Form 1099-Q, they must determine if the distribution was used for qualified education expenses. If the distribution, or any portion of it, was nonqualified, the earnings portion is subject to taxes and, in some cases, a penalty. The taxpayer determines the amount of taxes and penalty owed on the nonqualified distribution by completing Form 5329, which is contained in the individual tax return file. By combining information from the 1099-Q with information in the individual tax return file, we identified the percentage of taxpayers who reported nonqualified distributions that were subject to a penalty (see the simplified illustration below). We also used SOI data to estimate the tax savings by using the National Bureau of Economic Research’s (NBER) TAXSIM Model, a microsimulation model of U.S. federal and state income tax systems. TAXSIM calculates estimated liabilities under U.S. federal and state income tax laws from actual tax returns that have been prepared for public use by the Statistics of Income Division of the IRS. Our analysis of the tax savings from 529 plans excludes returns with a filing status of married filing separately. To provide information on the factors that affect participation, we interviewed officials from the following five state 529 plans and their industry partners: Louisiana, Michigan, Pennsylvania, Utah, and Virginia. We used College Savings Plan Network (CSPN) data to select states that represented a variety of plan types (direct-sold, advisor-sold, and prepaid), offered a number of features (e.g., various state tax benefits, state matching programs), and were geographically diverse. We also used suggestions provided by academic experts and industry representatives to inform our selection as well as to provide information on 529 plan participation. We interviewed academic researchers (including the Center for Social Development), industry regulators (the Financial Industry Regulatory Authority and the Municipal Securities Rulemaking Board), financial services companies (American Funds and UPromise), financial experts (such as Financial Research Corporation and Morningstar), College Savings Plan Network, Savingforcollege.com, and consumer interest groups (Investment Company Institute and the American Association of Individual Investors). We analyzed CSPN data on state 529 plans to provide a national overview of plan features, such as fees and state tax benefits. Biennially, states submit plan data to CSPN through an online system to be posted on the CSPN website. CSPN provided us with data on each state as of July 2012. We analyzed the data for every state for both direct-sold and advisor-sold plans on the following features: whether the state offers a matching grant program, whether the state offers tax deductions for contributions and the amount, whether the state offers tax credits for contributions and the amount, types of investment options offered, total contribution limits, and required initial contribution amounts.
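The sketch below is a simplified, hypothetical illustration of the record matching between Form 1099-Q information and individual tax returns described earlier in this appendix; it is not the code or file layout used in this analysis. The field names (such as tin, distribution, and form_5329_additional_tax), the single-key merge, and the sample values are all invented, and the actual data require additional handling of multiple accounts, owners versus beneficiaries, and qualified-expense determinations.

```python
import pandas as pd

# Hypothetical illustration of flagging penalized (nonqualified) 529 plan
# distributions by matching Form 1099-Q records to individual tax returns.
# All field names and values are invented for illustration only.

form_1099q = pd.DataFrame({
    "tin": ["A1", "A2", "A3", "A4"],            # filer identifier (hypothetical)
    "distribution": [8000, 12000, 3000, 5000],  # total distribution reported
    "earnings": [1500, 2500, 400, 900],         # earnings portion of distribution
})

returns = pd.DataFrame({
    "tin": ["A1", "A2", "A3", "A4"],
    # Additional tax reported on Form 5329 for education-account distributions;
    # zero indicates no penalty was reported.
    "form_5329_additional_tax": [0, 250, 0, 0],
})

merged = form_1099q.merge(returns, on="tin", how="inner")
merged["penalized"] = merged["form_5329_additional_tax"] > 0

share_penalized = merged["penalized"].mean()
print(merged[["tin", "distribution", "penalized"]])
print(f"Share of matched filers reporting a penalized distribution: {share_penalized:.0%}")
```

In the actual analysis, such shares would be estimated with the SOI sampling weights rather than unweighted record counts.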
Using the same CSPN data, we also analyzed the following fee categories: program manager fee, state fee, annual account maintenance fee, miscellaneous fee, annual distribution fee, estimated underlying fund expenses, total annual asset-based fees, maximum deferred sales charge, and minimum initial sales charge. Further, we compared CSPN disclosure principles with direct-sold plan disclosure documentation for the five states we interviewed. We reviewed the extent to which the selected states incorporated elements of the CSPN disclosure principles and whether plan documentation was easily comparable across states. Specifically, we compared whether the state documents contained 11 elements outlined in the principles, including a summary of key features, an assessment of the individual summary features, a statement of any guarantee by the state issuer or the state, information on state tax treatment and other benefits, information that the state offers more than one plan, fee descriptions, and investment risks, among others. These elements were chosen based on discussions with states and experts who identified plan fees, tax benefits, and investment options as some of the most important features consumers consider when choosing whether or not to participate in a 529 plan. In addition to recording whether states have disclosed the information listed above, we assessed whether any information was missing, where the information was located in the document, and any other observations about the ability to find and understand plan information. We reviewed studies conducted by academics, researchers, industry representatives, and federal agencies on why families choose to participate in 529 plans and what features might serve as barriers or incentives. We identified literature published since 2006, when Congress passed the Pension Protection Act of 2006, Pub. L. No. 109-280, which made permanent the tax exemption on 529 plan distributions used for qualified education expenses. Our review included scholarly/peer-reviewed material, government reports, hearings and transcripts, trade/industry articles, association/nonprofit/think tank publications, and working papers. We searched information sources such as EconLit, ProQuest, ERIC, PolicyFile, WorldCat, ECO, PapersFirst, ArticleFirst, and Academic OneFile. These online sources are nationally recognized databases that index and abstract research literature. We selected search terms to capture literature that specifically addressed 529 plans, college savings plans, qualified state tuition programs, and prepaid tuition. Of the 32 studies we identified, 12 studies met the following criteria: 1) included information on plan features in specific states, 2) addressed the consequences for consumers of choosing one type of 529 plan over another, 3) identified barriers or incentives for consumers to choose 529 plans, 4) included data collected by states on plan participation, and/or 5) included information on plan disclosures to consumers. All studies cited in the report were reviewed by at least two GAO analysts. Studies that included statistical methods were reviewed by a GAO statistician and social science analyst. All studies were reviewed for methodological soundness and to ensure that any limitations associated with study methodologies were conveyed to readers in our report text, footnotes, or this appendix. To understand the extent to which savings in 529 plans affect federal financial aid awards, we interviewed Education officials in the Office of Postsecondary Education.
We also reviewed relevant statutory provisions, the FAFSA, the Federal Student Aid Handbook, and other Education documents related to calculating the EFC. To understand the extent to which savings in 529 plans are considered in state financial aid calculations, we interviewed officials from state financial aid offices in six states. To select the state financial aid offices, we used information from a 2009-2010 survey by the National Association of State Student Grant and Aid Programs to identify states that indicated they used a financial aid formula other than the federal methodology in their primary needs analysis and/or provided special treatment for state 529 plans. For report consistency, we selected the same states selected for 529 plan site visit locations to the extent possible (i.e., where the data supported the selection based on the criteria). We interviewed representatives in the following state financial aid offices: Louisiana Office of Student Financial Assistance, Michigan Office of Scholarships and Grants, New York Higher Education Services Corporation, Pennsylvania Higher Education Assistance Agency, Utah Higher Education Assistance Authority, and State Council of Higher Education for Virginia. We also selected six institutions from the states whose financial aid offices were selected for interviews. To obtain a national perspective on institutional financial aid and determine the best method for selecting the individual institutions, we interviewed representatives at several financial aid organizations including the Association of Private Sector Colleges and Universities, the College Board, the National Association of Student Financial Aid Administrators, the National Association of Independent Colleges and Universities, and the American Association of Community Colleges. In these interviews, some officials said that schools with larger endowments were likely to require families to provide additional information, such as that required on the College Board’s PROFILE application, to award their institutional financial aid. We matched the 2012-2013 College Board’s list of institutions that use the PROFILE application with Education’s 2009-2010 Integrated Postsecondary Education Data System to calculate endowment amounts per student at public and private non-profit four-year institutions. We also reviewed the list of schools that participate in the Private 529 Consortium and selected at least one school that was also part of this group. One state did not have an institution that used the PROFILE application so we reviewed websites of postsecondary schools in that state to identify a school that collected data in addition to the FAFSA. We interviewed representatives at the following institutions: Xavier University of Louisiana, University of Michigan, St. Lawrence University, Swarthmore College, University of Utah, and University of Richmond. Michelle Sager, Acting Director, Education, Workforce, and Income Security Issues, 202-512-6806 or [email protected]. In addition to the contact named above, Gretta Goodwin (Assistant Director), Amy Anderson, Rachel Beers, and Laura Henry contributed to all aspects of this report. Also making key contributions were Carl Barden, James Bennett, Nora Boretti, Jessica Botsford, Jason Bromberg, Alicia Cackley, Melinda Cordero, Patrick Dudley, Shannon Finnegan, Kim Frankena, Mark Glickman, David Lewis, Ashley McCall, John Mingus, Mark Ramage, MaryLynn Sergent, George Scott, Walter Vance, Kathleen van Gelder, and Michelle Loutoo Wilson.
Paying for college is becoming more challenging, partly because of rising tuition rates. A college savings plan can be an option to help meet these costs. To encourage families to save for college, earnings from 529 plans--named after section 529 of the Internal Revenue Code--grow tax-deferred and are exempt from federal income tax when they are used for qualified higher education expenses. In fiscal year 2011, the Department of the Treasury estimated these plans represented $1.6 billion in forgone federal revenue. Managed by states, over one hundred 529 plan options were available to families nationwide as of July 2012. The number of 529 plan accounts and the amount invested in them has grown during the past decade. GAO was asked to describe (1) the percentage and characteristics of families enrolling in 529 plans, (2) plan features and other factors that affect participation in 529 plans, and (3) the extent to which savings in 529 plans affect financial aid awards. GAO analyzed government data, including the SCF. This survey's 529 plan data are combined with Coverdells, so the SCF estimates used in the report include both 529 and Coverdell data. GAO also analyzed National Postsecondary Student Aid Study data; conducted interviews with federal and state officials, industry and academic experts, and state and institutional higher education officials; reviewed 529 plan and Department of Education documents; conducted a literature review; and reviewed relevant federal laws, regulations, and guidance. A small percentage of U.S. families saved in 529 plans in 2010, and those who did tended to be wealthier than others. According to the Survey of Consumer Finances (SCF), less than 3 percent of families saved in a 529 plan or Coverdell Education Savings Account (Coverdell)--a similar but less often used college savings vehicle also included in the SCF. While the economic downturn may have reduced income available for education savings, even among those families who considered saving for education a priority, fewer than 1 in 10 had a 529 plan (or Coverdell). Families with these accounts had about 25 times the median financial assets of those without. They also had about 3 times the median income and the percentage who had college degrees was about twice as high as for families without 529 plans (or Coverdells). States offer consumers a variety of 529 plan features that, along with several other factors, can affect participation. Some of the most important features families consider when choosing a 529 plan are tax benefits, fees, and investment options, according to experts and state officials GAO interviewed. These features can vary across the state plans. For example, in July 2012, total annual asset-based fees ranged from 0 to 2.78 percent depending on the type of plan. 529 plan officials and experts GAO interviewed said participation is also affected by families' ability to save, their awareness of 529 plans as a savings option, and the difficulty in choosing a plan given the amount of variation between plans. Selected states, however, have taken steps to address these barriers. For example, to address families' ability to save, particularly for low-income families, some states have adopted plans that include less risky investments, have low minimum contributions, and match families' contributions. Savings in 529 plans affect financial aid similarly to a family's other assets. For federal aid, a family's assets affect how much it is expected to contribute to the cost of college. 
If the amount of those assets exceeds a certain threshold, then a percentage is expected to be used for college costs. For example, for students who are dependent on their parents, the percentage of parental assets, including savings in 529 plans, that the family may be expected to contribute ranges from 2.64 to 5.64 percent. Many states and selected institutions also treat 529 plan savings the same as other family assets. However, a few states provide them with special treatment, such as exempting those funds from their financial aid calculation. GAO is not making any recommendations in this report.
The Coast Guard, a maritime military service within DHS, has a variety of responsibilities including port security and vessel escort, search-and-rescue, and polar ice operations. To carry out these and other responsibilities, the Coast Guard operates a number of vessels, aircraft, and information technology programs. The Coast Guard intends to further meet these responsibilities through ongoing efforts to modernize or replace assets through the Deepwater program. The Coast Guard’s current acquisition portfolio, at $27 billion, includes 17 major acquisition programs and projects and is managed by the Coast Guard Acquisition Directorate, CG-9. Major acquisitions—level I and level II—have life-cycle cost estimates equal to or greater than $1 billion (level I) or from $300 million to less than $1 billion (level II). Major acquisition programs are to receive oversight from DHS’s acquisition review board, which is responsible for reviewing acquisitions for executable business strategies, resources, management, accountability, and alignment to strategic initiatives. The board also supports the Acquisition Decision Authority in determining the appropriate direction for an acquisition at key acquisition decision events. At each Acquisition Decision Event, the Acquisition Decision Authority approves acquisitions to proceed through the acquisition life-cycle phases upon satisfaction of applicable criteria. Additionally, the Coast Guard and other DHS components have Component Acquisition Executives responsible in part for managing and overseeing their respective acquisition portfolios. DHS has a four-phase acquisition process: (1) Need phase—Define a problem and identify the need for a new acquisition; (2) Analyze/Select phase—Identify alternatives and select the best option; (3) Obtain phase—Develop, test, and evaluate the selected option and determine whether to approve production; and (4) Produce/Deploy/Support phase—Produce and deploy the selected option and support it throughout the operational life cycle. Table 1 provides further information about the Coast Guard major acquisition programs. Since 2001, we have reviewed Coast Guard acquisition programs and have reported to Congress, DHS, and the Coast Guard on the risks and uncertainties inherent in its acquisitions. In our June 2010 report on selected DHS major acquisitions, we found that acquisition cost estimates increased by more than 20 percent in five of the Coast Guard’s six major programs we reviewed. For example, the National Security Cutter’s acquisition cost estimate grew from an initial figure of $3.45 billion to $4.75 billion from 2006 to 2009—a 38 percent increase. Moreover, five of six programs faced challenges due to unapproved or unstable baseline requirements, and all six programs experienced schedule delays. The Rescue 21 search-and-rescue program, for example, had both unapproved or unstable baseline requirements and schedule delays. Several of our reports have focused on the Coast Guard’s Deepwater acquisition program. Most recently, in our July 2010 report on the program, we found that the Coast Guard had generally revised its acquisition management policies to align with DHS directives, was taking steps to address acquisition workforce needs, and was decreasing its dependence on the Integrated Deepwater Systems contractor by planning for alternate vendors for some assets and by planning to award and manage work outside of the Integrated Coast Guard Systems contract for other assets.
We also have ongoing work on the status of the Deepwater program that is related but complementary to this report and will result in a separate published report later this year. The Coast Guard updated its overarching acquisition policy since we last reported in July 2010 to better reflect best practices and respond to our prior recommendations, and to more closely align its policy with the DHS Acquisition Management Directive Number 102-01. For example, in November 2010, the Coast Guard revised its Major Systems Acquisition Manual, which establishes policy and procedures, and provides guidance for major acquisition programs. Revisions included a list of the Executive Oversight Council’s roles and responsibilities; aligning roles and responsibilities of independent test authorities to DHS standards, which satisfied one of our prior recommendations; a formal acquisition decision event before a program receives approval for low-rate initial production, which addresses one of our prior recommendations; and a requirement to present an acquisition strategy at a program’s first formal acquisition decision event. The Coast Guard’s Blueprint for Continuous Improvement (Blueprint) was created after the Coast Guard began realigning its acquisition function in 2007 and is designed to provide strategic direction for acquisition improvements. The Blueprint uses GAO’s Framework for Assessing the Acquisition Function at Federal Agencies and the Office of Federal Procurement Policy’s Guidelines for Assessing the Acquisition Function as guidance, but also includes quantitative and qualitative measures important to the acquisitions process. Through these measures, the Coast Guard plans to gain a clearer picture of its acquisition organization’s health. The Blueprint was revised in October 2010 to formalize the acquisition directorate’s integration with the Coast Guard’s mission support structure and includes plans to annually evaluate the Blueprint’s measures. The Coast Guard developed the Blueprint as a top-level planning document to provide acquisition process objectives and strategic direction as well as to establish action items, but DHS’s Inspector General expressed concern that the agency did not prioritize action items and consider the effects of delayed completion of action items on subsequent program outcomes. For example, the 2010 Inspector General report found that by the end of fiscal year 2009, 23 percent of assigned action item completion dates slipped without determining the effect on acquisition improvements. In response to the Inspector General’s report, the Coast Guard has taken steps to prioritize its action items; however, it is too soon to tell the outcome of these actions. These policies were updated to align with DHS guidance and reflect best practices. Coast Guard officials also attribute acquisition reforms to the Coast Guard’s efforts to assume responsibilities for all major acquisition programs. We previously reported in 2009 that the Coast Guard acknowledged its need to define systems integrator functions and assign them to Coast Guard stakeholders as it assumed the systems integrator role. As a result, the Coast Guard established new relationships among its directorates to assume control of key systems integrator roles and responsibilities formerly carried out by the contractor. 
For example, according to Coast Guard officials, the Coast Guard formally designated certain directorates as technical authorities responsible for establishing, monitoring, and approving technical standards for all assets related to design, construction, maintenance, logistics, C4ISR, life-cycle staffing, and training. In addition, the Coast Guard is developing a Commandant’s Instruction to further institutionalize the roles and responsibilities for Coast Guard’s acquisition management. Beyond updating its major acquisition policies and guidance, the Coast Guard Acquisition Directorate also increased the involvement of its Executive Oversight Council to facilitate its acquisition process. Coast Guard officials stated that the council, initially established in 2009 with an updated charter in November 2010, provides a structured way for flag-level and senior executive officials in the requirements, acquisition, and resources directorates, among others, to discuss programs and provide oversight on a regular basis. As the Coast Guard began assuming the system integrator function from the Deepwater contractor in 2007, it believed it needed a forum to make trade-offs and other program decisions especially in a constrained budget environment; according to officials, the council was established in response to that need. Coast Guard officials noted that major programs are now required to brief the formalized council annually, prior to milestones, and on an ad hoc basis when major risks are identified. According to Coast Guard documentation, from fiscal year 2010 through the first quarter of fiscal year 2011, the council met over 40 times to discuss major programs. For example, the council held more than five meetings to discuss the Offshore Patrol Cutter’s life-cycle costs and system requirements, among other issues. The discussions are captured at a general level in meeting minutes and sent to the Coast Guard Acquisition Directorate for approval. The Coast Guard has made progress in reducing its acquisition workforce vacancies since April 2010. As of November 2010, the percentage of vacancies dropped from about 20 percent to 13 percent or from 190 to 119 unfilled billets out of 951 total billets. Acquisition workforce vacancies have decreased, but program managers have ongoing concerns about staffing program offices. For example, the HH-65 program office has funded and filled 10 positions out of an identified need for 33 positions. Although the program has requested funding for an additional 8 billets for fiscal year 2012, due to the timing of the request, the funding outcome is unknown as of April 2011. Similarly, the Interagency Operations Center program is another office affected by acquisition workforce shortages. According to the Coast Guard, as of March 2011, the program office has funded and filled 11 positions out of the 27 needed. For some of these positions, the Interagency Operations Center program uses staff from the Coast Guard’s Command, Control, and Communications Engineering Center for systems engineering support; however, workforce shortages remain. Program officials may face additional challenges in hiring staff depending on the location of the vacancies within the program’s management levels. For example, a program official stated that vacant supervisory positions must be filled first before filling remaining positions because lower-level positions would not have guidance for their activities. 
Figure 1 shows the status of the Coast Guard’s acquisition workforce vacancies as of November 2010. We reported in January 2010 that the Coast Guard faces difficulty in identifying critical skills, defining staffing levels, and allocating staff to accomplish its diverse missions. An official Coast Guard statement from 2009 partially attributed the challenge of attracting staff for certain positions to hiring competition with other federal agencies. In February 2010, we reported on the Coast Guard’s long-standing workforce challenges and evaluated the agency’s efforts to address these challenges. For example, we reported that while the Coast Guard developed specific plans to address its human capital challenges, the plans fell short of identifying gaps between mission areas and personnel needed. The Coast Guard has taken steps to outline specific areas of workforce needs, including developing a human-capital strategic plan and commissioning a human-capital staffing study published in August 2010, but program managers continue to express concerns with the Coast Guard’s ability to satisfy certain skill areas. For example, the August 2010 human-capital staffing study stated that program managers reported concerns with staffing adequacy in program management and technical areas. To make up for shortfalls in hiring systems engineers and other acquisition workforce positions for its major programs, the Coast Guard uses support contractors. As of November 2010, support contractors constituted 25 percent of the Coast Guard’s acquisition workforce. While we have previously noted the risks of using support contractors, we reported in July 2010 that the Coast Guard acknowledged the risks of using support contractors and had taken steps to address these risks by training its staff to identify potential conflicts of interest and by releasing guidance regarding the role of the government and appropriate oversight of contractors and the work that they perform. The Coast Guard has also made progress ensuring that program management staff received training and DHS certifications to manage major programs. For example, according to Coast Guard officials, in December 2010, the Coast Guard was 100 percent compliant with DHS personnel certification requirements for program-management positions. We have previously reported that having the right people with the right skills is critical in ensuring that the government achieves the best value for its spending. Most of the Coast Guard’s major acquisition programs continue to experience challenges in program execution, schedule, and resources. For program execution, the Coast Guard reported in December 2010 that 12 of its 17 major programs face moderate to significant risk in one or more execution metrics such as technical maturity or logistics, which required management attention. Of these, seven programs have carried these risks for 1 year or more. For example, the HC-130J program has reported logistics-assessment risks requiring management attention for 3 years. Regarding schedule challenges, the Coast Guard reported in December 2010 that 10 of its 12 major programs with approved acquisition program baselines exceeded schedule objective or threshold parameters. For example, the Maritime Patrol Aircraft HC-144A program exceeded its schedule because it delayed a production decision in order to complete initial operational testing and evaluation per a DHS acquisition review board decision.
As this program was already 4 years behind schedule, added schedule delays may require the Coast Guard to extend a legacy aircraft’s service life, which may incur additional costs to sustain it. Major Coast Guard programs also face resource risks. As of December 2010, 12 of the Coast Guard’s 17 major programs face moderate to significant risk in project resource metrics such as budgeting and funding. For 9 of these programs, risks have been reported for more than 1 year. In addition, four Coast Guard programs (HC-130H aircraft, Nationwide Automatic Identification System, C4ISR, and HH-60 helicopter) have notified DHS of acquisition program baseline breaches. The Coast Guard’s unrealistic acquisition budget planning also exacerbates the challenges Coast Guard acquisition programs face. We have previously reported that the Coast Guard faced risks from unrealistic funding levels and that its reliance on sustained high funding levels in an environment of budget constraints puts program outcomes at risk if projected funds are not received. In December 2010, the Coast Guard reported that 8 of the 17 major program offices were updating their acquisition program baselines due in part to reduced funding in the fiscal year 2011-2015 Capital Investment Plan. According to Coast Guard acquisition officials, when a Capital Investment Plan has funding levels that are lower than what a program planned to receive, the program is more likely to have schedule breaches and other problems. For example, in November 2010 the HC-130H program reported a schedule breach to DHS due in part to reduced Capital Investment Plan funding projections for fiscal years 2011-2015 and had to revise its schedule parameters to reflect the lower projected funding levels. This also occurred in the Nationwide Automatic Identification System major acquisition program. The program had an estimated cost growth of approximately $32 million due to reduced out-year funding in the fiscal year 2009-2013 plan, and after further funding reductions in the fiscal year 2011-2015 plan, the program subsequently deferred efforts to update the program baseline. According to Coast Guard officials, the Coast Guard is currently reevaluating the program’s system requirements and associated project cost, schedule, and performance objectives. In 2011, DHS acquisition oversight officials informed the Coast Guard that future breaches in other programs would be almost inevitable as funding resources decrease. Figure 2 illustrates Coast Guard major acquisition programs facing execution, schedule, resource, and budget planning challenges as of December 2010. The Coast Guard developed several action items in its October 2010 update to its Blueprint for Continuous Improvement to address budget planning challenges. According to Coast Guard acquisition officials, the most important step is for Coast Guard leadership to establish a priority list for the major programs based on actual acquisition budgets received in prior years, and then to make trade-offs between programs to fit within historical budget constraints. The Coast Guard developed an action item to assess the percentage of program funding profiles that fit into the Capital Investment Plan. Specifically, the Blueprint indicates that the Coast Guard will establish and implement a process to compare and report the extent to which each individual program’s funding fits into the Capital Investment Plan funding parameters.
Further, the Coast Guard plans to analyze and regularly report gaps in these funding profiles to the Coast Guard’s acquisition leadership. The Coast Guard also identified the need to promote funding stability in the Capital Investment Plan and intends to evaluate that effort by establishing a mechanism and baseline to measure Capital Investment Plan stability by comparing project funding against previous, current, and future 5-year Capital Investment Plans. However, while the Coast Guard officials stated their intention to use these metrics to elevate the priority and funding issues to leadership, it is too soon to tell the outcome of these steps. In a separate ongoing review, we are further assessing the Coast Guard’s management of program costs and other budget issues. According to the Coast Guard, it currently has 81 interagency agreements, memorandums of agreement, and other arrangements in place primarily with DOD agencies to support its major acquisition programs. Each of the 17 major Coast Guard acquisition programs leverages DOD support, primarily from the Navy. According to Coast Guard officials, they rely on DOD experience and technical expertise because they both procure similar major equipment, including ships and aircraft. Examples range from acquiring products and services from established DOD contracts to using engineering and testing expertise from the Navy. Some major programs also receive assistance from other DHS components or other agencies on a more limited basis. For example, the Rescue 21 program partnered with the Federal Aviation Administration at two sites to use its land and towers to install search and rescue capabilities. The Secretary of Homeland Security is authorized to enter into agreements with other executive agencies and to transfer funds as required. This authority has been delegated to the Commandant of the Coast Guard. Interagency agreements include a description of the general terms and conditions that govern the relationship between agencies, and specific information on the requesting agencies’ requirement to establish a need and to authorize the transfer of funds. According to Coast Guard officials, Coast Guard interagency agreements with DOD typically include a memorandum of agreement or a memorandum of understanding with a DOD agency. A memorandum of agreement is a document that defines the responsibilities of, and actions to be taken by, each of the parties so that their goals will be accomplished. A memorandum of understanding is a document that describes broad concepts of mutual understanding, goals, and plans shared by the parties. Interagency agreements also are typically funded by military interdepartmental purchasing requests in which the requiring agency must include a description of the end items purchased and the funding data for acquiring these supplies or services. Interagency agreements can be for direct, assisted, or other than assisted acquisitions. In direct acquisitions, the requesting agency places orders against another agency’s indefinite-delivery contracts, such as task and delivery order contracts, while assisted acquisitions use the acquisition services of a servicing agency. Other than assisted acquisitions utilize the internal expertise of a servicing agency. In 2001, the Chief of Naval Operations and the Commandant of the Coast Guard agreed to build a national fleet that combines Navy and Coast Guard forces to maximize effectiveness across all naval and maritime missions. 
More than 50 of the Coast Guard's agreements with DOD leverage support from the Department of the Navy. Moreover, Coast Guard and Navy officials have noted an increase in Navy involvement in supporting the Coast Guard's major acquisition programs since the Coast Guard assumed the Deepwater lead systems integrator role in 2007. Examples of updated support agreements in place with Navy entities include the following:
A 2011 interagency agreement with the Naval Sea Systems Command (NAVSEA) that supports Coast Guard acquisition programs in program management, design, technical assistance, cost estimating, and other areas.
A 2010 memorandum of agreement with the Navy's Commander, Operational Test and Evaluation Forces, that allows the Coast Guard to request that the Navy serve as the operational test authority for Coast Guard major acquisition programs.
Two 2009 memorandums of agreement/interagency agreements with the Naval Air Systems Command (NAVAIR) that allow Coast Guard major acquisition programs to leverage Navy services and aviation program office assistance, including planning, technical assistance, cost estimation, warfare modeling and analysis, requirements definition, risk management, and integrated logistics support.
A 2009 memorandum of agreement with the Navy's Space and Naval Warfare Systems Command Pacific that allows Coast Guard programs to request and obtain technical and other support services for the research and development, design, engineering, integration, acquisition, test and evaluation, installation, and life-cycle support of Coast Guard systems.
Most Coast Guard major acquisition programs leverage Navy expertise in some way to support a range of testing, engineering, and other program activities. For example, the Fast Response Cutter program used Naval Surface Warfare Center Dahlgren services to help with topside design and electromagnetic testing. In another instance, the Coast Guard used the Naval Surface Warfare Center Carderock Division to test and evaluate boats and provide technical expertise for the Response Boat-Medium program. According to Coast Guard officials, the Coast Guard also collaborated with Navy cost estimators and contracting staff to prepare for negotiations to award the November 2010 production contract for the fourth National Security Cutter. The Navy also provided engineering and technical support for the Coast Guard's MH-60 helicopter program. Further, the Navy's Operational Test and Evaluation Command is currently supporting testing activities for 11 Coast Guard programs. According to Coast Guard and DOD officials, the Coast Guard has achieved cost savings from using DOD contracts through quantity discounts and reduced unit prices when Coast Guard orders are combined with orders from DOD departments. Additional benefits include reductions in contracting administrative costs and expedited processing times. According to Coast Guard officials, examples include the following:
The Coast Guard's HC-130J program coordinated C-130J contracting efforts through the Air Force acquisition office's contract rather than contracting directly with the aircraft manufacturer and benefited from discounts by ordering along with other DOD agencies. In addition, by using the standard configuration of the C-130J common among U.S. government users, the Coast Guard benefited from cost savings in aircraft sustainment.
The Coast Guard obtained Navy systems, such as the SPQ-9B Radar, at a reduced cost for Coast Guard cutter programs.
The National Security Cutter program used Navy contracts to provide and install ultra high frequency radios and electronic warfare systems.
The Rescue 21 program placed search-and-rescue sensors on Army, Air Force, Navy, and Marine Corps facilities, which reduced recurring Coast Guard costs.
The HH-65 program office reduced procurement costs by approximately 12 percent, or $25,000, by purchasing a range of subsystems and components, such as a cockpit display unit, from an Army contract.
The Coast Guard has also identified opportunities to further leverage DOD resources. In 2009, the Navy and Coast Guard conducted a commonality study that identified, among other things, 17 commonality opportunities with near-term potential for mutual benefit that required little or no up-front investment to execute; typically, they require only the modification of a policy document. Key opportunities identified included the following:
Acquisition personnel exchanges with NAVSEA to promote collaboration and leveraging of cross-service capabilities in the acquisition community.
Leveraging existing Navy logistics management systems during the development of the Coast Guard Logistics Information Management System to reduce development costs.
Coast Guard program managers largely rely on informal contacts to learn about the agreements in place with DOD to support program activities. Many Coast Guard program managers we met with indicated that they became aware of DOD resources that could be leveraged for their programs through contacts with their DOD counterparts or by other means. According to Coast Guard officials, program managers also learn about another agency's expertise or resources through word of mouth, market research, head of contracting activity discussions, conferences, or networking channels. While this interaction has led to Coast Guard programs successfully leveraging DOD resources, Navy officials told us that in the past Navy leadership was not always fully aware of support being provided to the Coast Guard and, as such, was unable to ensure that the right Navy entities were conducting the work and that the results provided to the Coast Guard met Navy standards. NAVAIR and NAVSEA have each established a liaison assigned to the Coast Guard to facilitate information and knowledge sharing about Navy capabilities and contracts available to Coast Guard programs. For example, the NAVAIR and NAVSEA liaisons serve as on-site experts at the Coast Guard, engage in dialogue with the Coast Guard, and work to increase Coast Guard awareness of Navy resources. However, without current knowledge of existing interagency agreements, Coast Guard program managers may not be aware of the liaisons and their role in working with the Navy. Relying on informal contacts may also lead to missed opportunities for greater cooperation and leveraging of DOD resources. For example, the Coast Guard has 50 or more agreements with the Navy, some of which are broad agreements with major Navy commands such as NAVSEA or NAVAIR, while others are specific agreements with Navy agencies such as the Naval Ordnance Safety and Security Office, Naval Surface Warfare Center Dahlgren Division, and the Naval Supply Systems Command. Interagency agreements may designate a point of contact for Coast Guard program managers, but program managers do not have a systematic way to gain insight into the details of the agreements.
According to Coast Guard contracting officials, the Coast Guard has recently begun to develop a database of interagency agreements with DOD and other agencies that Coast Guard programs can leverage to support acquisition activities. However, Coast Guard officials noted that, because of the limited attention devoted to this issue, only 5 of the approximately 81 interagency agreements are in a data system accessible to program staff. These officials also noted that a database is needed to avoid duplicative efforts and to ensure that program staff are aware of existing agreements, including the latest versions of agreements specifying updated products and services available. The Coast Guard has continued to make progress in strengthening its capabilities to manage its acquisition portfolio by updating acquisition policies and practices as well as reducing vacancies in the acquisition workforce. As the Coast Guard improves its acquisition management capabilities, it may find that adjustments and changes will be necessary in light of how well its major acquisition programs are progressing. The Coast Guard has leveraged DOD contracts to help support its major acquisition programs, but reliance on informal contacts may lead to missed opportunities for greater cooperation and leveraging of DOD resources to help save scarce resources, manage program risks, and support positive acquisition outcomes. To provide Coast Guard program management staff with greater access to updated information about agreements in place with DOD to facilitate leveraging support for major acquisition programs, we recommend that the Commandant of the Coast Guard take steps to ensure that all interagency agreements are captured in a database or other format and make this information readily accessible to program staff. We provided a draft of this report to the Coast Guard, DHS, and DOD. DHS provided oral comments stating that it concurred with the recommendation. The Coast Guard and DOD provided technical comments, which we incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, the Secretary of Defense, and the Commandant of the Coast Guard. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix II.
The Coast Guard Authorization Act of 2010, as amended, specified that "Within 180 days after the date of enactment of the Coast Guard Authorization Act for fiscal year 2010, the Comptroller General of the United States shall transmit a report to the appropriate congressional committees that—(1) contains an assessment of current Coast Guard acquisition and management capabilities to manage Level 1 and Level 2 acquisitions; (2) includes recommendations as to how the Coast Guard can improve its acquisition management, either through internal reforms or by seeking acquisition expertise from the Department of Defense (DOD); and (3) addresses specifically the question of whether the Coast Guard can better leverage Department of Defense or other agencies' contracts that would meet the needs of Level 1 or Level 2 acquisitions in order to obtain the best possible price." To determine the Coast Guard's current management capabilities for its major acquisition programs, we evaluated the Coast Guard's acquisition policies and processes, the status of its acquisition workforce, and the execution of its major programs since we last reported on the Coast Guard's acquisitions and acquisition management in June and July 2010. We reviewed Coast Guard acquisition governance, policy, and process documents, such as the Coast Guard's Major Systems Acquisition Manual and Blueprint for Continuous Improvement, that have been issued, implemented, or updated since July 2010. We also interviewed Coast Guard and other Department of Homeland Security (DHS) acquisition officials to understand the factors behind the acquisition governance changes, and we assessed how the changes have been implemented to date through reviews of meeting briefings, minutes, and subsequent decision memos. To evaluate the status of the Coast Guard's acquisition workforce, we reviewed Coast Guard information on government, contractor, and vacant positions to identify any progress made in reducing acquisition workforce vacancies and filling critical positions since July 2010, as well as any positions that continue to be challenging to fill. Additionally, we obtained and analyzed Coast Guard program staff information to determine which programs were experiencing staffing shortfalls and conducted interviews to supplement Coast Guard information and determine the extent to which staffing shortfalls affect program execution. To evaluate the Coast Guard's execution of its major programs, we analyzed information on the status of those programs since July 2010 through reviews of general acquisition status reports (e.g., Quarterly Acquisition Reports to Congress and Quarterly Performance Reports), program briefings, and acquisition process documents (e.g., Acquisition Program Baselines) to determine how many programs have cost, schedule, or performance issues based on criteria in the Major Systems Acquisition Manual. Further, we analyzed additional program performance, schedule, cost, and funding information from the Capital Investment Plan, breach memos, and acquisition decision memos to identify funding stability issues and the extent to which funding issues were factors leading to breaches in established program baselines. We also corroborated program information with interviews of Coast Guard program staff and interviews with external DHS stakeholders, such as acquisition oversight and cost analysis staff in the acquisition program management directorate.
Moreover, we examined and identified best practices from prior GAO reporting on Coast Guard funding stability as a factor in program continuity and successful outcomes. To determine the extent to which the Coast Guard leverages DOD and other agency contracts or expertise to support its major acquisition programs, we examined the Coast Guard’s interagency agreements and identified the agencies the Coast Guard most commonly used to support major acquisition programs. On the basis of this analysis, we interviewed Coast Guard officials, as well as DOD, Navy, and Air Force officials about resources provided to support Coast Guard major acquisition programs. We also discussed with Coast Guard officials any current efforts to update the agreements. Using this analysis, we identified examples of cost savings and other benefits for selected Coast Guard acquisitions. Further, we reviewed relevant GAO and DHS Inspector General reports. We corroborated testimonial information from interviews with Coast Guard acquisition and program staff by reviewing contracts, agreements, and other documents that show the amount of resources expended by the Coast Guard for DOD-provided goods and services and by interviewing DOD officials at the Naval Sea Systems Command, Naval Air Systems Command, Space and Naval Warfare Systems Commands, and the Department of the Air Force. We conducted this performance audit from January 2011 to April 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Other individuals making key contributions to this report were John Neumann, Assistant Director; William Russell; Jessica Drucker; Sylvia Schatz; Kenneth Patton; and Morgan Delaney Ramaker. Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. Coast Guard: Observations on the Requested Fiscal Year 2011 Budget, Past Performance, and Current Challenges. GAO-10-411T. Washington, D.C.: February 25, 2010. Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets. GAO-09-497. Washington, D.C.: July 17, 2009. Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009. Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009. Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008. Coast Guard: Status of Selected Assets of the Coast Guard’s Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008. Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. 
Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001.
The Coast Guard manages a broad $27 billion major acquisition portfolio intended to modernize its ships, aircraft, command and control systems, and other capabilities. GAO has reported extensively on the Coast Guard's significant acquisition challenges, including project challenges in its Deepwater program. GAO's prior work on Coast Guard acquisition programs identified problems in costs, management, and oversight, but it also recognized several steps the Coast Guard has taken to improve acquisition management. In response to the Coast Guard Authorization Act of 2010, GAO (1) assessed Coast Guard capabilities to manage its major acquisition programs, and (2) determined the extent to which the Coast Guard leverages Department of Defense (DOD) and other agency contracts or expertise to support its major acquisition programs. GAO reviewed Department of Homeland Security (DHS) and Coast Guard acquisition documents, GAO and DHS Inspector General reports, and selected DOD contracts; and interviewed Coast Guard, DHS, and DOD officials. The Coast Guard continues to strengthen its acquisition management capabilities by updating acquisition management policies and reducing acquisition workforce vacancies, but significant challenges remain. In November 2010, the Coast Guard updated its acquisition policy to further incorporate best practices and respond to prior GAO recommendations, such as aligning independent testing requirements with DHS policies and formalizing the Executive Oversight Council to review programs and provide oversight. Additionally, the Coast Guard reduced acquisition workforce vacancies from 20 to 13 percent from April to November 2010, but shortfalls persist in hiring staff for certain key positions, such as systems engineers, and some programs continue to be affected by unfilled positions. While the Coast Guard has increased its acquisition management capabilities, most Coast Guard major acquisition programs have ongoing cost, schedule, or program execution risks. Additionally, unrealistic budget planning for the Coast Guard's acquisition portfolio exacerbates these challenges and will likely lead to more program cost and schedule issues. The Coast Guard has several actions under way to further improve acquisition policies, address workforce shortfalls, and address budget planning issues, but it is too soon to tell whether the actions will be effective. The Coast Guard leveraged DOD contracts to purchase products and services or to gain expertise in support of major acquisition programs. The Coast Guard has entered into approximately 81 memorandums of agreement and other arrangements, primarily with DOD, which has experience and technical expertise in purchasing major equipment such as ships and aircraft, to support its major acquisition programs. Examples range from acquiring products and services from established DOD contracts to obtaining engineering and testing expertise from the Navy. According to the Coast Guard, leveraging DOD contracts has led to cost savings for Coast Guard acquisition programs. For instance, the Coast Guard received price discounts for C-130J aircraft by coordinating contracting efforts with the Air Force rather than contracting directly with the aircraft manufacturer. In another example, Coast Guard officials used Navy cost estimators and contracting staff to prepare for the November 2010 production contract for the National Security Cutter.
At this point, Coast Guard program managers rely on informal contacts to learn about the agreements in place to support program activities, thus potentially limiting staff knowledge of DOD resources available. Coast Guard contracting officials only recently recognized the need to make DOD agreements available to program staff, but due to limited attention to this issue, only about 5 of the 81 agreements are currently accessible to program managers. GAO recommends that the Coast Guard take steps to ensure program staff have access to interagency agreements with DOD. DHS concurred with the recommendation.
To identify budgetary issues and concerns, we examined existing divestiture proposals in the United States, reviewed earlier GAO reports and proposed legislation relating to asset divestiture in the United States, and conducted a literature search of budgetary issues related to privatization. We selected industrialized countries for our study that had a history of selling government-owned assets and for which we could gain access to appropriate staff and written information. Based on these criteria, we selected Canada, France, Mexico, New Zealand, and the United Kingdom. To review the divestiture experiences of these countries, we conducted telephone interviews with a wide range of government officials directly involved with divestiture and with experts outside of government who were responsible for specific aspects of divestiture, such as valuation. We also reviewed literature on privatization in the five countries and official government documents. We did not verify the accuracy of all of the information provided to us, nor did we evaluate the sales' relative success in achieving national goals. We concentrated our discussions on cross-cutting, rather than industry-specific, issues we had identified that would be of interest to the United States as it examines its own divestiture efforts. We conducted this work in Washington, D.C., from June 1995 through November 1995 in accordance with generally accepted government auditing standards. Privatization experts in each of the countries reviewed our material, and we have incorporated their comments where appropriate. Privatization in the five countries we studied involves a number of complex steps. The government selects candidates for sale and determines which must be restructured prior to divestiture. The government may choose to create a central unit to be responsible for coordinating and implementing the transfer of ownership from the public to the private sector. To assist it in the privatization process, the government usually hires financial advisors. The advisors are expected to represent the government's and the taxpayers' interests, as well as provide guidance on how to value and sell the entity. The government's overall objectives for privatization heavily influence how these steps are carried out. We found that the government's goals, either for its overall privatization agenda or for individual privatization initiatives, influenced what entities would be privatized, how they would be valued, what type of sale would be used, and who would be eligible to purchase the entity. All the governments we contacted undertook privatization for a variety of reasons, but all stated that they used privatization primarily to increase economic efficiency and reduce the size of the public sector. Most also stated that they used privatization to assist in reducing their public debt. Some governments clearly placed a higher priority on increasing economic efficiency, while others gave a higher priority to debt reduction. In the United Kingdom, privatization was grounded in the belief that, generally, the private sector could operate commercial enterprises more efficiently than the public sector. Because the decision to sell had already been made, the entity's present value under continued government ownership was usually not estimated. The government was willing to sell an entity even if it would generate more money for the government as a public entity than the government would receive from its sale.
The United Kingdom has also tried to increase share ownership of stock among the general public and has sold many entities through public offerings. The government has generally sold shares at a discount to employees and the public and, as an incentive, offered installment plans to pay for the shares. New Zealand undertook privatization to improve economic efficiency, reduce the government’s exposure to commercial risk, and decrease government debt. To help achieve these goals, sales were open to the largest possible number of bidders and foreign ownership of privatized assets was not restricted. We were told that privatization has greatly enhanced the performance of certain inefficient sectors of the economy such as the telephone industry, which in turn has helped to make privatization more acceptable to the public. During the 1980s and early 1990s, the Progressive Conservative government in Canada privatized government corporations to increase economic efficiency, reduce the demands that public enterprises exerted on government management and financial resources, and reduce government intervention in the economy. The size of the public debt has become a growing concern in Canada and, according to a public debt expert, the current Liberal government has begun to portray privatization as a way to alleviate the deficit and debt situation in Canada, particularly through two public offerings, the Canadian National Railroad and Petro Canada. In the past, Canada has limited foreign participation in asset sales, but the most recent public offering of Canadian National placed no restrictions on foreign ownership. According to documents provided by the French Treasury, the French government sought to develop the Paris financial market through its privatization program. The use of public offerings has enabled privatization to play a decisive role in shifting personal savings into the equity market. While preferential treatment is provided to French residents at the time of sale, there are few legal limits on share ownership in public offerings. Except for companies operating in the health, safety, and defense sectors, the only legal limit is that non-European Union investors may not acquire securities representing more than 20 percent of the company’s equity at the time of the initial sale. In subsequent sales, there are no limits on foreign purchases. Privatization in Mexico was part of a larger strategy to increase the efficiency and competitiveness of the economy and increase its credibility on the international markets. Debt reduction has been essential for Mexico, particularly to reestablish credibility with its creditors. To help meet its goals of increased economic efficiency and debt reduction, the Mexican government closed, merged, or sold 1,008 out of 1,155 public enterprises between 1982 and 1992. The government deposited most of the proceeds from privatization in a separate Contingency Fund and used the funds to retire government debt. While some of the entities that were privatized in Mexico were very profitable, many of the entities the government sold or shut down were money losers. The subsequent discontinuation of subsidies also helped to improve the government’s fiscal position. Mexico’s constitution identifies strategic areas that must remain within the domain of the federal government. The constitution has been amended over time, however, to permit certain strategic areas to be privatized. For example, railroads were strategic at one time and can now be privatized. 
And, while Petroleos Mexicanos (PEMEX)—the national oil company—remains a strategic firm, an official told us that the basic petrochemical operations of PEMEX are now being offered for sale. The Mexican government restricts the level of foreign capital that can be invested in the country, although foreign investment regulations were liberalized at the end of the 1980s and majority foreign ownership is allowed in most sectors. We were told that the level of foreign investment that should be allowed in each industry is currently being debated by the government. A central agency or commission holds primary responsibility for the management or oversight of the privatization process in all of the countries in this study. This structure has enabled a stable core of staff to develop expertise in managing the privatization process. It has also allowed the governments to implement a governmentwide approach to privatization. Table 2 shows the entities controlling the privatization process in each of the governments we studied. In the United Kingdom, a privatization unit within the Treasury oversees and assists the responsible ministry with the sale of public enterprises. The Treasury plays a coordinating role to ensure consistent decisions across individual privatizations. According to a Treasury official in the United Kingdom, the government's privatization efforts have led it to recognize the value of having a group of dedicated officials oversee most divestitures. Canada had no central authority for the privatization process prior to the late 1980s; each ministry was responsible for developing its own privatization proposals. Because of dissatisfaction with the management and pace of privatization, a central authority, the Office of Privatization and Regulatory Affairs, was established in 1986. The current Crown Corporations and Privatization Sector group, which reports to both the Treasury Board and the Ministry of Finance, was created in 1991 to oversee the management and disposal of Crown corporations, which are wholly owned government corporations. The Treasury Board monitors the management of the budget and serves as a budget scorekeeper. The countries we studied generally either (1) converted government agencies or functions into a corporate form prior to privatization or (2) privatized entities that were already in a corporate form. The definition of a government corporation or enterprise varies from country to country. Government corporations are generally commercial in character, self-sustaining or potentially self-sustaining, and may be exempt from a variety of personnel and regulatory restrictions applicable to government entities. In some countries, government corporations pay taxes. New Zealand has used corporatization as a way to increase the efficiency and competitiveness of an entity while it remains within the government and as a stepping stone to privatization. We were told that New Zealand state-owned enterprises (SOEs), which are entities that have been commercialized and corporatized, are very similar to their private sector counterparts. They pay taxes to the government, are not subject to government budget and personnel rules, must borrow from the private sector, and have private sector boards. The government, however, remains the sole shareholder. While the government does not guarantee the debt of SOEs, we were told that there is some concern that offshore debt holders may assume that some form of implicit guarantee exists.
New Zealand primarily has privatized entities that have already been transformed into SOEs. The government in New Zealand has used corporatization as an opportunity to clean up an entity’s outstanding obligations prior to privatization. Experts stated that the performance of entities that were corporatized and then sold has been better than those that were not corporatized prior to privatization. These experts also said that the government learned it was much more difficult to privatize a department without the restructuring and debt reduction that corporatization engenders. The existing obligations and liabilities of a department complicate the sale and, as the entity has no track record as a commercial enterprise, it can be difficult to value. The United Kingdom has primarily sold nationalized industries, which are already in a corporate form. Canada has primarily divested Crown corporations. The governments in the United Kingdom and Canada have also begun to divest departmental activities. France has almost exclusively sold public enterprises, and the Mexican government sold either SOEs or their fixed assets. The valuation process is complex—it involves not only the mechanics of valuing the entity, but also determining the appropriate type of sale and the best financial and/or organizational structure for the entity at the time of divestiture. All of this occurs within the overriding context of the country’s privatization goals. Valuation is not an exact science. It requires a great deal of experience and depends to some extent on the professional judgment of those conducting the valuation. Different valuation methods may result in different ranges of expected values depending on, for example, the assumptions about the future performance of the entity, the expectation of future earnings, and the level of investor interest. Officials in most of the countries told us that because of this complexity, the centralized agencies responsible for the management of privatization hired financial advisors to assist with the valuation process. In the United Kingdom and France, the entity being sold often hired its own financial advisors to represent the entity’s interests, such as the desire for a generous capital structure. Neither the entity nor its advisors, however, took the lead in managing the sale process. All the governments we studied employed a combination of valuation techniques to estimate the value of the entity being sold and to forecast the proceeds. Most used present value analysis, but other approaches were also used to develop an overall valuation. In the governments we studied, the valuation process served a variety of goals. For example, valuation entered into some governments’ decisions about whether to sell an entity. Valuation was also used to determine the appropriate financial and organizational structure to maximize proceeds. The United Kingdom used valuation primarily to maximize proceeds because the decision to sell had already been made. In New Zealand, the government relied on valuation to determine whether to sell the entity to meet its goals of improved economic efficiency and debt reduction. Many of the countries also used valuation to determine a minimum acceptable price or a price range. 
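The sell-versus-retain comparison that underlies several of these valuations can be sketched as a simple present value test. The notation below is illustrative only and is not drawn from any one country's valuation guidance:

\[
V_{\text{retain}} = \sum_{t=1}^{T} \frac{E[CF_t]}{(1+r)^t}, \qquad V_{\text{sell}} = P_{\text{expected}} - C_{\text{prep}}, \qquad \text{sell if } V_{\text{sell}} > V_{\text{retain}},
\]

where \(E[CF_t]\) is the expected cash flow to the government in year \(t\) under continued ownership, \(r\) is a commercial discount rate appropriate to the entity's industry, \(P_{\text{expected}}\) is the forecast sale proceeds, and \(C_{\text{prep}}\) is any government-funded preparation for sale, such as retained debt or pension top-ups.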
We were told that, in general, none of the governments in this study included estimated future tax revenues from the entities sold when estimating future returns to the government, both because many of the entities already paid taxes and because tax revenues are generally difficult to forecast. France primarily uses valuation to gauge the market, minimize market risk, and maximize the proceeds from a sale. The French privatization law of 1993 specifies what will be privatized; the government, therefore, does not use valuation to compare the return to the government of retaining as opposed to selling the entity. Valuation is done using a variety of analyses, including net present value analysis. In determining whether to privatize an entity, New Zealand conducts studies to estimate the market value of the entity if it were sold compared to the returns accruing to the government from retaining the entity. The government's advisors conduct a cash flow analysis to determine if the entity is worth more under government ownership or private ownership. When valuing the entity both under continued government ownership and as a private sector concern, the government uses a commercial discount rate appropriate for the industry in which the entity operates. If the returns from selling the entity do not exceed the returns from government ownership, the entity generally is not sold. In Canada, the government uses valuation to assist in pricing rather than to determine whether or not to privatize an entity. Valuation is used to develop a range of expected proceeds rather than a minimum acceptable price. The government has not generally sold entities where the sale would have been uneconomical, although money-losing entities brought low returns. According to a privatization expert, the government has reduced the number of people who know the details of privatization transactions during their final stages to help maintain the integrity of the process. In Mexico, the government first determined whether an entity was indispensable; if it was not, the government closed, merged, or sold it. In cases where the entity was to be sold, the government used valuation to maximize the proceeds from the sale. We were told that the government often uses the current value of future cash flows to value an entity. The advisors develop a minimum reference point for the price the government should expect to receive for the entity. The majority of transactions have been at a price equal to or greater than the recommended minimum amount. Both the United Kingdom and New Zealand use "clawbacks" to address uncertainty in specific valuation situations, for example, where assumptions crucial to the valuation may change. Clawbacks in the United Kingdom have been used to protect the government (that is, the taxpayers) from new owners realizing unanticipated windfall profits after privatization from the sale of surplus property. A typical clawback may specify that if the entity sells certain property within a 10-year period at a price exceeding its original value by a specified amount, a portion of the proceeds will go to the government. The United Kingdom's experience underscores the importance of identifying and valuing land owned by public entities that are to be sold. In some instances, such land has far less value to the government than to the private sector, which may develop it.
We were told that the United Kingdom takes a cautious approach when using clawbacks because it recognizes that their use can decrease the sale price as well as constrain the entity's commercial behavior. For example, if the government "claws back" certain gains, it may reduce a firm's incentive to find and use new productive resources. According to a government official, clawbacks have been used when the value of the property has increased after privatization, not when operating profits were higher than expected. In New Zealand, clawbacks have been used in a broader range of situations. For example, according to documents from the New Zealand Treasury, the Petroleum Mining Licenses contract includes clauses whereby the government receives more money if oil prices rise above the benchmark levels used in the valuation or if reserves prove to be greater than the current expected reserves estimate. New Zealand also used a clawback in the sale of the gas reticulation system. An official told us that the provision specified that if the share price increased to a specified amount by a certain date, the company would pay the government a certain amount, but if the price decreased, the government would buy back shares. The price did in fact decrease, and the government had to buy back a percentage of the shares. The governments we studied used several types of sales in their privatization efforts, including public offerings, private sales to companies or individual investors, and management/employee buyouts. The type of sale can be linked to the size of the entity being sold and the country's financial markets. Public offerings are generally used in fairly well-developed financial markets and for the sale of large assets with established financial track records. Significant administrative costs are typically associated with a public offering, such as developing the prospectus and marketing the sale. In situations where the entities to be sold are relatively small or lack a financial track record, or where the country's financial markets are less developed, private sales are used. The United Kingdom and France used public offerings to increase share ownership and develop financial markets, respectively. Mexico has primarily used private sales because of the limited size of its financial markets; New Zealand has typically used private sales rather than public offerings, which, according to government officials in New Zealand, are more costly to administer and involve greater risk. Canada has relied on both public offerings and private sales, based primarily on the size of the entities being sold. According to officials in each of the five countries we spoke with, the governments used advisors for assistance in determining the most appropriate type of sale. The United Kingdom's major privatizations have been through public offerings. According to the government's financial advisors, public offerings develop the greatest price competition and thus allow the government to obtain the most value for the entity being sold. Initial public offerings were carried out with traditional methods, involving underwriters and fixed price offers. However, the United Kingdom has moved away from using underwriters and now uses "book building." Book building involves establishing a syndicate to ask institutional buyers how many shares, and at what price, they will purchase.
This establishes a range of prices and enables the offering to be more accurately priced, in contrast to a fixed price offer in which the share price is determined prior to the actual offering. The United Kingdom has used private sales for entities where it would not have been appropriate or cost effective to use public offerings. In addition, the United Kingdom used management/employee buyouts to increase employee share ownership. In private sales, the government’s financial advisors typically conduct a discounted cash flow analysis to establish an internal benchmark for acceptable bids, which is not disclosed. An official told us that the government may accept a price that is below the benchmark because the government’s main objective is to increase economic efficiency through privatization, and maximizing the return to the taxpayer is secondary. The final price is determined by a competitive bidding process, in which the highest bid is usually accepted. The government encourages management/employee buyouts, and price preferences have been offered to management to assist with buy-out costs. Both the National Freight Corporation and Vickers Ship Building and Engineering were sold to their former employees. In addition, provisions are made to encourage employee participation in share offerings. For example, the government may issue free shares or offer discounts to employees. Shares were purchased by 99 percent of British Gas employees and 96 percent of British Telecom employees. According to a government official, New Zealand has conducted its divestitures mainly through private sales, rather than through public stock offerings, because such sales are less expensive to administer, require fewer warranties and indemnities, result in a maximized return and minimize the risk of over- or under-pricing. In order to maximize returns, New Zealand sold entities to the highest bidder and was willing to sell to foreign owners. Management/employee buyouts are permitted, but only as part of the competitive bidding process. We were told that buyout bids are rarely the highest bids and are therefore usually unsuccessful. In Canada, the government uses both public offerings and private sales, depending on the size and type of entity being sold. The government uses underwriters for public offerings and large offerings have been completed in several stages. According to a privatization consultant, the Canadian government always tries to pay attention to employee interests because if this is not done, employee concerns can potentially derail the sale. We were told that management/employee buyouts had not been used at the time of our review. Mexico has primarily used private sales for its privatizations because it has not yet had the capital markets to support public offerings. The sales are conducted through a competitive bidding process with a sealed bid. Since the price is the most important consideration in the assessment, the highest bid generally wins. France generally uses public offerings to sell public enterprises, but it has also used negotiated sales (private sales on an auction basis). Financial advisors provide the Privatization Commission with a range of expected prices, from which the Commission determines a minimum acceptable value. The government then uses book building to gauge the market, facilitate placement of the offering, and reduce market risk. The sale price the government will accept cannot be below the minimum value. 
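The book-building mechanics described above can be illustrated with a minimal sketch. The example below assumes a single uniform clearing price chosen from institutional indications of interest; the bid prices, quantities, and the helper name clearing_price are invented for illustration and do not describe any particular government's offering process.

```python
# Illustrative sketch of book building: collect institutional indications of
# interest (price, quantity), then find the highest price at which cumulative
# demand covers the shares on offer. Hypothetical data only.

def clearing_price(bids, shares_offered):
    """Return the highest price at which cumulative demand >= shares_offered,
    or None if the book does not cover the offering."""
    cumulative = 0
    # Walk the indications of interest from the highest price downward.
    for price, quantity in sorted(bids, reverse=True):
        cumulative += quantity
        if cumulative >= shares_offered:
            return price
    return None

if __name__ == "__main__":
    # (price per share, number of shares) -- invented indications of interest
    book = [(1.20, 40_000_000), (1.15, 60_000_000),
            (1.10, 80_000_000), (1.05, 120_000_000)]
    offered = 150_000_000
    price = clearing_price(book, offered)
    print(f"Indicative price range: {min(p for p, _ in book):.2f}"
          f"-{max(p for p, _ in book):.2f}")
    print(f"Illustrative clearing price: {price:.2f}")
```

In practice, a syndicate would also weigh factors such as investor quality and aftermarket demand, so the final offer price may be set somewhat below the strict clearing level suggested by the book.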
In France, the government is required to include employees in public offerings. A 10-percent quota of the shares sold on the open market was reserved for employees of the privatizing entity. Price rebates, up to a maximum of 20 percent, could be granted. However, if the price rebate exceeded 5 percent, the employees had to retain the shares for a specified period, which was generally 2 years. Public entities were often restructured to improve their salability or to engender competition. Financial restructuring might be needed to mitigate the entity’s debt and/or existing liabilities, such as pension obligations and environmental liabilities. Organizational restructuring might also be necessary to break up a monopoly and introduce competition. Determining both the current and future status of an entity’s existing obligations was therefore often considered a necessary step in the valuation process. For these five countries, such obligations included underfunded pension commitments, post-retirement health benefits, environmental cleanup costs, and debt. Governments decided whether to retain responsibility for the remaining liabilities or to sell them with the entity. The market price can be expected to increase for an entity sold with fewer liabilities, particularly liabilities with uncertain costs. Organizational restructuring may also be necessary. If the entity is a monopoly, it may need to be broken up prior to the sale or regulations may need to be put in place to protect the consumer. Whether or not an entity is sold as a monopoly may also affect the price the government receives from the sale. Governments determine, either on a case-by-case basis or as an overall policy, how an entity to be privatized will be structured for sale. The market price of an entity is reduced by the liabilities that come with it; the price may be reduced further by the risk premium associated with any uncertain liabilities. All governments in this study retained some amount of debt associated with entities to be sold, and generally paid the balance on under-funded or unfunded employee obligations. All of the governments used public resources to restructure entities in an attempt to make them viable competitive firms. Officials in all of the countries, however, stated that the government does not generally put a significant amount of new investment in an entity prior to sale and many stated that this was because the private sector is believed to be better able to make investment decisions. The United Kingdom has undertaken substantial restructuring of debt and liabilities to make the entities economically attractive to investors. According to an official in the United Kingdom, the government may retain a portion of the entity’s debt. In addition, the government ensures that pension programs are properly funded prior to the sale of the entity. For example, prior to the sale of the National Freight Company, the government paid 47 million pounds into the pension fund; the government also retained 1,250 million pounds of British Telecom’s underfunded pension liability. New Zealand generally restructures entities when they are converted into a corporate form (that is, into a state-owned enterprise). Since the government usually corporatizes before selling an entity, it addresses issues pertaining to outstanding obligations and liabilities during the corporatization process rather than during sale preparation. 
The government decides whether to retain or transfer these obligations and liabilities to the newly created SOE on a case-by-case basis. According to a privatization expert, pensions in New Zealand are not underfunded, but the personnel and liability issues that remain, such as the transfer of the pension plans to the private sector, are sorted out during the corporatization process. We were told that in France public enterprises are under commercial law and public enterprise employees do not generally have civil service status; thus, there are few changes for the employees as the result of a sale. A government official told us that France, with a few exceptions, only sells entities that are in good financial condition and that most of the enterprises that have undergone privatization have not required major financial restructuring. The Canadian government tries to ensure that the entity to be sold is commercially viable. In preparing Crown corporations for sale, the government usually retained some of the debt and other liabilities, but this varied depending on the entity being sold. The goal of the government is to reduce liabilities to the point where the corporation is able to operate viably in the private sector. We were also told that the Canadian government provides generous severance payments and that their cost can be significant. In Mexico, the government identifies the entity’s unfunded obligations prior to the sale of the entity. We were told, however, that this does not mean the government will necessarily retain the liability or pay off the unfunded costs. The government quantifies the existing liabilities so that the bidder knows the status of the entity that is for sale. In some privatizations, the government may retain all of the debt or liabilities, while in other instances the government clearly identifies the liabilities and/or debt and the prospective owner agrees to pay the costs. In some cases, the entity would not have been saleable if the government had not retained its large debt. An official in Mexico told us that any employee layoffs usually occurred after a firm was privatized. An employee who is dismissed is entitled to a minimum of 3 months pay plus a seniority bonus, which is equal to as much as 20 days per year of service. When a firm with staff having above average seniority is sold, the potential for the seniority bonus is disclosed in the sales transaction. Existing and future environmental liabilities can also represent a large cost. We were told that in Mexico the government performs an audit to document existing environmental liabilities and provides the written report to the bidders. In some instances, the government will assume responsibility for the clean-up, and in other cases the bidder will buy the entity with certain liabilities intact. In either case, uncertainty about these liabilities has been reduced. Officials we interviewed said that the presence or absence of competition is very important in determining how and what to privatize. Some of the governments that sold monopolies either (1) tried to create competition by eliminating the monopoly statutes that had prevented competitors from entering the market or (2) broke up monopolies, thereby injecting competition. Governments in France and New Zealand will not privatize monopolies. New Zealand will not sell natural monopolies because they are economically inefficient. 
France does not sell monopolies because the government believes that certain public functions, such as the provision of public utilities, require a monopolistic structure to ensure equal access to high quality service. As a result, France has not privatized certain entities that other countries have privatized. The United Kingdom has sold natural monopolies, but in each case it established a regulatory body with the responsibility of preventing the abuse of monopoly powers. An official stated that during the early stages of the United Kingdom’s privatization program, the government sold natural monopolies along with the monopolies’ related business units. For example, the gas industry was sold as one company, meaning that the pipe network (the natural monopoly) was sold with its business units, such as those that bought gas or those that distributed gas. Thus, the natural monopoly was in a strong position to abuse its power because of the vertical and horizontal integration. The United Kingdom has learned from this experience and now tries not to privatize natural monopolies with their related business units. The government tries to break up the monopoly so that the portions that could be competitive are separated, thus leaving only the natural monopoly to be regulated. A privatization expert in Mexico stated that the government has learned important lessons from the sale of intact monopolies. The telephone company in Mexico (TELMEX) was sold intact. The government received significant revenues from the sale of the monopoly, and therefore, initially considered the sale a success. However, the taxpayer benefitted little in terms of prices or service. The government’s policy is now to attempt to update the regulations and the structure of the industry or entity prior to its sale. We were told that the New Zealand government has a policy not to sell natural monopolies. It will, however, sell statutory monopolies once the legislation that created the monopoly has been removed and the entity has been restructured to allow competition in the industry. We were also told that even though entities with monopoly rights generate a higher price, New Zealand has decided not to sell monopolies because they would not enhance economic efficiency if transferred to the private sector. For example, New Zealand Telecom was de-monopolized and new firms were encouraged to enter the market prior to its privatization. Also, an official told us that New Zealand sold its railroad, but only after repealing the law requiring rail freight transport for distances exceeding 100 miles. This, in effect, permitted other forms of transport to compete. In contrast, the air traffic control system, which is a natural monopoly, has been corporatized, but there are currently no discussions of privatizing it. France has not included monopolies in its privatization program. According to Treasury documents, this decision stems from the premise that certain activities must be strictly regulated if overall effectiveness is to be reconciled with consumer protection. For this reason, France’s major public services, including electricity, gas, telecommunications, and rail transport, have not been privatized. The monopoly statutes of some of these services, however, are due to expire in the near future. A Treasury official in France told us that the government is considering the privatization of some of these entities, including France Telecom, whose monopoly statute will expire in 1998. 
Competition is a factor in Canada in determining whether or not to privatize, but the government has in some cases privatized where no competition exists. In these cases, the government established a regulatory regime prior to the sale. This is discussed in greater detail in the next section. Many of the countries we spoke with continued to regulate their former monopolies, even after breaking them up. These governments expressed the view that some degree of continued control of rates and services in what were previously public functions was necessary to protect the interests of the consumer and ensure economic efficiency. As discussed above, the United Kingdom establishes a regulatory body with responsibility for regulating natural monopolies and promoting competition. For example, price formulas are used that in most cases limit the annual price increases to no more—and usually less—than the rate of inflation. In addition, competition is encouraged by breaking out the potentially competitive segments of the monopoly and restricting their activities, thus allowing other firms to enter into the market. The New Zealand government has used “Kiwi Shares” to protect consumers. According to a government official, a Kiwi Share is a single share of the privatized entity that is held by the government and provides the government with regulatory authority to enforce conditions of the sale. However, the Kiwi Share has no voting or income distribution rights. For example, in the privatization of New Zealand Telecom, a Kiwi Share was used to protect rural telephone service. It was feared that once New Zealand Telecom was sold, rural service would either decrease or its price would increase significantly. The Kiwi Share limited future price increases for rural service to no more than the annual rate of overall price inflation. The sale of Teleglobe Canada, a Crown corporation with a monopoly on international communications, is an example of Canada’s sale of an intact monopoly. The corporation was not subject to regulation prior to its sale, and the privatization process was long and complex, particularly because of unresolved issues, including questions relating to regulatory policy. The government knew that higher rates would raise the value to the bidder and the government. Higher rates, however, were unpopular with the Canadian public. The final regulatory agreements allowed Teleglobe to retain its monopoly status for at least 5 years, but required it to reduce its rates. Most of the governments in this study use any cash proceeds that result from privatization to reduce debt and interest costs and do not permit proceeds to be used to offset ongoing spending. However, according to various government officials, proceeds have sometimes been used to finance ongoing spending. Many of the governments display proceeds from privatization both within and distinct from their government’s annual budget deficit or surplus numbers. How a government incorporates privatization proceeds into its budget has important implications for deficit reduction. If the proceeds are included within the budget, the government’s deficit for that year will be reduced by the nonrecurring privatization proceeds. In concept, this could lessen the pressure to identify spending reductions in ongoing operations. Decreasing the spending levels of ongoing operations can result in long-term budgetary savings, while the proceeds from privatization provide only a one-time offset to the deficit. 
Although governments talk of using privatization proceeds to reduce debt, technically, a country cannot actually begin to reduce its nominal government debt unless it is in fiscal balance or has a budget surplus. Nevertheless, when a government sells assets, the sale proceeds will reduce the country’s borrowing requirements from what they would have been and, as a result, the debt servicing costs will also be reduced. Mexico has earmarked most of the proceeds from privatization for debt reduction. Government officials in Mexico told us that they strongly believe that nonrecurring revenues from privatization should not be used for ongoing operations. The government created a Contingency Fund in which the revenues from privatization have been set aside as reserves to deal with external shocks or to cancel public debt. Government officials told us that most of the proceeds from privatization have been placed in this fund and used to retire government debt. According to a budget official, the government presents its budget and deficit numbers with and without the proceeds from privatization in order to discourage using the proceeds for ongoing operations. New Zealand has also used proceeds from privatization primarily for debt reduction. The government displayed the sale proceeds on-budget but drew a line to signify that they were not included in what the government called the “adjusted deficit.” The proceeds were used to reduce the government’s borrowing requirements when the government was in deficit, thus reducing debt servicing costs; when the government reached budget surplus, the proceeds were used to buy down debt. The proceeds were not used to offset expenditures and the deficit reduction that results from privatization appears in addition to planned spending reductions. France has generally used the proceeds from privatization to reduce its borrowing requirements. According to documents provided by the Treasury, between 1986 and 1988, about two-thirds of any proceeds from privatization were earmarked for debt reduction. More recently, however, these same documents state that the proceeds have been used for general budget appropriations and to sustain the economy through a period of reduced growth. A substantial portion of the proceeds from the current privatizations is being used to retain programs designed to cushion the impact of a recession by assisting the unemployed in finding new jobs. In Canada, the impact of privatization on the reported budget deficit is the difference between the realized proceeds of the sale and the recorded value. For example, according to a government official, if an entity is recorded in the public accounts at Can$1 billion and the proceeds from the sale equal Can$1 billion, the sale will have no on-budget effect on the reported deficit. If the proceeds exceed the recorded value, for example, if they equal Can$1.2 billion, the reported deficit will be reduced by the amount that is greater than the recorded value, that is, by Can$0.2 billion. This Can$0.2 billion must, by law, be deposited in the Debt Servicing and Reduction account. The funds in this account are to be applied to the annual interest costs on government borrowing and ultimately to buying down debt. The proceeds are used to reduce the government’s borrowing requirements. In the United Kingdom, privatization proceeds are considered negative expenditures. 
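The Canadian on-budget calculation just described can be expressed as a minimal sketch, assuming only the illustrative Can$ figures given in the text; the function name is ours, not official budget terminology.

```python
def on_budget_effect(proceeds: float, recorded_value: float) -> float:
    """Impact of an asset sale on the reported deficit under the Canadian
    treatment described above: realized proceeds minus the entity's recorded
    value in the public accounts. A positive result reduces the reported
    deficit by that amount and must, by law, be deposited in the Debt
    Servicing and Reduction account."""
    return proceeds - recorded_value

# Illustrative figures from the text, expressed in Can$ millions.
print(on_budget_effect(proceeds=1000.0, recorded_value=1000.0))  # 0.0 -> no effect on the reported deficit
print(on_budget_effect(proceeds=1200.0, recorded_value=1000.0))  # 200.0 -> deficit reduced by Can$0.2 billion
```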
The proceeds are not earmarked for specific purposes, however, but are generally deposited in the Consolidated Fund along with other government receipts. Government officials stated that the proceeds are used to decrease the public sector borrowing requirement, which in the United Kingdom is defined as all receipts and expenditures at all levels of government, including borrowing by nationalized industries, debt interest, and privatization proceeds. The Treasury does not use the proceeds to make room for additional expenditures in programs that have a cash limit, but one official stated that he believed that the privatization proceeds have weakened the downward pressure on expenditures that Treasury tries to apply. The ways in which governments we studied implemented privatization programs and the lessons those policymakers learned could help the United States in evaluating and, ultimately, carrying out divestitures currently under consideration. As the debate over such proposals suggests, there are issues in the United States regarding how best to evaluate a proposal to sell, who should manage the valuation and sale processes, how to estimate future proceeds, how the sale should be structured, and how the proceeds should be treated in the budget. The experiences in the governments we examined suggest that often no single answer is widely applicable to all governments in all situations. Nonetheless, the information these governments provided may help the United States smooth the transfer of viable operations from the public to the private sector. Further, some specific elements of other governments’ practices may have particular relevance to issues the United States faces today. All the countries we studied kept management or oversight of the privatization process in their Treasury or central financial ministry. A government representative in one country told us that doing so allowed the government to build upon early divestiture experiences. The representative also noted that because the management of the entity to be sold had different and sometimes conflicting interests than those of the central financial agency, the latter maintained responsibility for most aspects of sale structuring. External financial advice was also necessary. The governments we consulted rely heavily on private sector expertise in estimating market values and structuring sales. These advisors generally reported to the finance ministry charged with managing the divestiture, not to the entity being sold. The U.S. government has embarked on only two large divestitures in the recent past and, therefore, has had little reason to create a centralized process for managing such sales. The Department of Transportation was responsible for managing the sale of Conrail and the Department of Energy oversaw the Great Plains Coal Gasification sale. More recently, the Department of the Treasury has played an active role in concurring with key decisions in the sale of the United States Enrichment Corporation (USEC); however, as we observed in our report on these preparations, the privatization plan USEC prepared clearly states that USEC and its board of directors will play the lead role in determining how and when the key decisions will be made. Although the U.S. government cannot privatize as many operations as other countries such as the United Kingdom and New Zealand have, more such proposals are under serious consideration in the United States. 
This suggests that a consistent management process could prove beneficial. Assigning responsibility for all divestitures to a single agency could take advantage of the “learning curve,” as these other governments did, by applying expertise gained from earlier privatizations to subsequent sales. Assigning this responsibility unambiguously could also help clarify who represents the government in the sales transactions and remove any appearance of conflict. This continuity of experience could also provide the expertise required to identify situations in which the use of special techniques such as clawbacks or warrants on windfall profits would be appropriate. In our report on USEC’s privatization plan, we stated that the Treasury should have the lead role since Treasury officials, unlike USEC’s managers and its board, will not be directly affected by the privatization and will therefore be better able to protect taxpayer interests. The experience of other governments also suggests that the U.S. government might usefully consider assigning the lead role in all divestiture preparations and management to a central financial agency, such as the Treasury Department. According to officials, the stated policies of most of the governments we studied do not permit the proceeds from asset sales to offset ongoing spending; however, such offsets have apparently occurred from time to time. These policies exist for the same reason that current U.S. budget rules do not permit proceeds from asset sales to be scored: using one-time revenues to finance new spending allows the appearance of balance in the short run while creating greater imbalance in the long run. A major issue under current U.S. budget law is that congressional committees that have jurisdiction over entities being privatized are not permitted to “score” the proceeds from asset sales for budget enforcement purposes; this means that they cannot use the proceeds to offset additional expenditures within their budget allocation. The privatization proceeds reduce the government’s current borrowing requirements from what they might otherwise have been, but they do not “count” towards the deficit reduction goals specified under the Budget Enforcement Act of 1990, as amended. These scoring rules may result in a non-neutral budget situation. While the proceeds are not scored, any outlays—such as those necessary to fund underwriters for the sale—or revenue losses associated with the sale of a revenue-generating entity are scored. Therefore, concerns that U.S. budget rules carry disincentives to privatize have merit. Because the costs of divestiture—including the loss of the entity’s stream of future net revenues—are counted while sale proceeds are not, it is difficult in this budgetary environment to sell money-making operations unless an offsetting change in receipts or mandatory spending can be found. Current budget rules favor retaining profit-making operations—the very entities most likely to appeal to potential buyers and least likely to require government subsidies—regardless of the economic or even fiscal arguments for moving these businesses to the private sector. We found that budget rules preventing the use of one-time proceeds to finance ongoing spending are widely used. However, budget rules should not dominate the divestiture decision; the decision to privatize should be made on other grounds. Therefore, the Congress may wish to consider ways to neutralize the scoring. 
It would be possible to alter budget rules to permit the use of sale proceeds only to offset any costs associated with implementing the sale plus any loss of net revenues now and in the future. Remaining proceeds could be used to reduce the government’s current borrowing requirements. We are sending copies of this report to the President of the Senate, the Speaker of the House of Representatives, and the Chairmen and Ranking Members of the House and Senate Budget Committees. We are also sending copies to the Director of the Congressional Budget Office, the Secretary of the Treasury, and the Director of the Office of Management and Budget. Copies will be made available to others upon request. This work was performed under the direction of Barbara Bovbjerg, Assistant Director. Other major contributors were Hannah Laufe, Evaluator-in-Charge, and Sheri Powner, Evaluator. Please contact me at (202) 512-9142 if you or your staff have any questions. In 1987, the Department of Transportation (DOT) sold Conrail through a public offering, which resulted in net proceeds to the government of $1.575 billion. The government’s goals for privatizing Conrail included providing for the long-term viability and continuation of rail service in the Northeast and Midwest, protecting the public interest in a sound rail transportation system, and, to the extent not inconsistent with these purposes, securing the maximum proceeds possible from the sale. The government met its primary goals for the sale of Conrail, in that it ensured the continuation of viable rail service, but only after spending about $8 billion creating, subsidizing, and preparing Conrail for sale. Conrail was created in 1976 as a for-profit government corporation resulting from the consolidation of seven bankrupt railroads. The government was given an 85-percent common stock interest in the company. The other 15 percent of Conrail’s common stock was held through an employee stock ownership plan. The Congress, however, spent over $7 billion on Conrail-related activities through 1988. This included funds to purchase the properties of the bankrupt railroads, operating subsidies and capital improvements, and employee buyouts. By the end of 1980, however, Conrail had accumulated substantial operating losses. In 1980, the Congress passed the Staggers Rail Act which authorized substantial deregulation of rail transportation. In 1981, the Congress passed the Northeast Rail Service Act (NERSA) to help Conrail reach profitability. NERSA enabled Conrail to expedite abandonment of unprofitable lines and transfer commuter services to other operators, established a government-funded severance program, provided funding for certain supplemental unemployment benefit payments, and exempted Conrail from state taxes. After these measures were enacted, Conrail began reporting operating profits. NERSA also authorized DOT to hire an investment advisor and, if Conrail was found to be operating profitably, to sell Conrail. DOT favored a sale to a single buyer whose financial strength would ensure Conrail’s future in the private sector. However, Conrail management and some Members of the Congress favored a public stock offering. On October 21, 1986, the Congress provided authority for the public offering through its passage of the Conrail Privatization Act, which required DOT to select six investment banks to manage the sale of the government’s interests in Conrail. On March 26, 1987, DOT made a public offering of Conrail’s stock. 
The Great Plains Coal Gasification Project was designed to produce pipeline quality synthetic natural gas from coal. In 1982, the Department of Energy (DOE) awarded a loan guarantee to a partnership of five energy companies for the plant’s construction and start-up. In 1985, the partnership defaulted on its DOE guaranteed $1.5 billion loan from the Federal Financing Bank, and DOE acquired control of and then title to the project. Operation of the plant continued under the original plant operator for the next 3 years. During this time, the plant was profitable, and project revenues exceeded expenses by about $110.3 million. In 1986, DOE announced it would sell the Great Plains project and subsequently hired Shearson Lehman Hutton, Inc., to assist it in doing so. In 1988, DOE selected the Basin Electric Power Cooperative as the preferred purchaser for the Great Plains project. Basin was one of nine prospective purchasers that submitted firm offers. According to DOE, Basin provided the highest offer and the strongest commitment to the project. DOE received $85 million at the sale closing and a commitment for DOE to share in future revenues from plant operations. Sale of NPRs and Oil Shale Reserves (GAO/RCED-96-28R, October 17, 1995). Federal Electric Power: Operating and Financial Status of DOE’s Power Marketing Administrations (GAO/RCED/AIMD-96-9FS, October 13, 1995). Uranium Enrichment: Process to Privatize the U.S. Enrichment Corporation Needs to Be Strengthened (GAO/RCED-95-245, September 14, 1995). Terminating Federal Helium Refining (GAO/RCED-95-252R, August 28, 1995). Tennessee Valley Authority: Financial Problems Raise Questions About Long-term Viability (GAO/AIMD/RCED-95-134, August 17, 1995). Sale of NPR-1 (GAO/RCED-95-255R, August 1, 1995). Naval Petroleum Reserves (GAO/RCED-95-141R, March 17, 1995). Uranium Enrichment: Observations on the Privatization of the United States Enrichment Corporation (GAO/T-RCED-95-116, February 24, 1995). Letter to Honorable William V. Roth, Jr. Discussing Privatization Experiences in Other Countries (B-260308, February 6, 1995). Naval Petroleum Reserve: Opportunities Exist to Enhance its Profitability (GAO/RCED-95-65, January 12, 1995). Mineral Resources: H.R. 3967 - A Bill to Change How Federal Needs for Refined Helium Are Met (GAO/T-RCED-94-183, April 19, 1994). Federal Electric Power: Views on the Sale of Alaska Power Administration Hydropower Assets (GAO/RCED-90-93, February 22, 1990). Lessons Learned About Evaluation of Federal Asset Sale Proposals (GAO/T-RCED-89-70, September 26, 1989). Synthetic Fuels: An Overview of DOE’s Ownership and Divestiture of the Great Plains Project (GAO/RCED-89-153, July 14, 1989). Federal Assets: Information on Completed and Proposed Sales (GAO/RCED-88-214FS, September 21, 1988). Conrail Sale: DOT’s Selection of Investment Banks to Underwrite the Sale of Conrail (GAO/RCED-87-88, February 17, 1987). 
Pursuant to a congressional request, GAO reviewed the divestiture experiences of other nations, focusing on the: (1) privatization process; (2) valuation and preparation of assets for sale; and (3) use and display of sale proceeds for budgetary purposes. GAO found that: (1) in the nations studied, privatization goals influenced how and what entities would be offered for sale; (2) the nations studied used privatization mainly to increase economic efficiency and reduce the size of government; (3) a central agency is responsible for overseeing the privatization process in each of the nations studied; (4) the nations generally privatized entities already in a corporate form; (5) some nations used clawbacks to require buyers to return a share of profits to the government; (6) although the use of clawbacks helps protect taxpayers against undervaluation, they decrease sale prices and may constrain entities' commercial behavior; (7) although the nations used various valuation techniques, all governments hired financial advisors to assist in the valuation process; (8) most of the nations studied attempted to remove liabilities from entities being privatized by restructuring debt, paying unfunded employee obligations, or otherwise removing risks that would reduce the entity's sale price; and (9) most of the governments used any proceeds resulting from privatization to reduce debt and interest costs rather than to offset ongoing spending.
The FEHBP is the largest employer-sponsored health insurance program in the country. Through it, about 8 million federal employees, retirees, and their dependents received health coverage—including for prescription drugs—in 2008. Coverage is provided under competing plans offered by multiple private health insurers under contract with OPM, which administers the program, subject to applicable requirements. In 2009, 269 health plan options were offered by participating insurers, 10 of which were offered nationally while the remaining health plan options were offered in certain geographic regions. According to OPM, plans must cover all medically necessary prescription drugs approved by the Food and Drug Administration (FDA), but plans may maintain formularies that encourage the use of certain drugs over others. Enrollees may obtain prescriptions from retail pharmacies that contract with the plans or from mail-order pharmacies offered by the plans. In 2005, FEHBP prescription drug spending was an estimated $8.3 billion. Medicare—the federal health insurance program that serves about 45 million elderly and disabled individuals—offers an outpatient prescription drug benefit known as Medicare Part D. This benefit was established by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) beginning January 1, 2006. As of February 2009, Part D provided federally subsidized prescription drug coverage for nearly 27 million beneficiaries. The Centers for Medicare & Medicaid Services (CMS), part of the Department of Health and Human Services (HHS), manages and oversees Part D. Medicare beneficiaries may choose a Part D plan from multiple competing plans offered nationally or in certain geographic areas by private sponsors, largely commercial insurers, under contract with CMS. Part D plan sponsors offer drug coverage either through stand-alone prescription drug plans for beneficiaries in traditional fee-for-service Medicare or through Medicare managed care plans, known as Medicare Advantage. In 2009, there were over 3,700 prescription drug plans offered. Under Medicare Part D, plans can design their own formularies, but each formulary must include drugs within each therapeutic category and class of covered Part D drugs. Enrollees may obtain prescriptions from retail pharmacies that contract with the plans or from mail-order pharmacies offered by the plans. Medicare Part D spending is estimated to be about $51 billion in 2009. The VA pharmacy benefit is provided to eligible veterans and certain others. As of 2006, about 8 million veterans were enrolled in the VA system. In general, medications must be prescribed by a VA provider, filled at a VA pharmacy, and listed on the VA national drug formulary, which comprises 570 categories of drugs. In addition to the VA national formulary, VA facilities can establish local formularies to cover additional drugs. VA may provide nonformulary drugs in cases of medical necessity. In 2006, VA spent an estimated $3.4 billion on prescription drugs. The DOD pharmacy benefit is provided to TRICARE beneficiaries, including active duty personnel, certain reservists, retired uniformed service members, and dependents. As of 2009, there were about 9.4 million eligible TRICARE beneficiaries. In addition to maintaining a formulary, DOD provides options for obtaining nonformulary drugs. 
Beneficiaries can obtain prescription drugs through a network of retail pharmacies, nonnetwork retail pharmacies, DOD military treatment facilities, and DOD’s TRICARE Mail-Order Pharmacy. In 2006, DOD spent $6.2 billion on prescription drugs. Medicaid, a joint federal-state program, finances medical services for certain low-income adults and children. In fiscal year 2008, approximately 63 million beneficiaries were enrolled in Medicaid. While some benefits are federally required, outpatient prescription drug coverage is an optional benefit that all states have elected to offer. Drug coverage depends on the manufacturer’s participation in the federal Medicaid drug rebate program, through which manufacturers pay rebates to state Medicaid programs for covered drugs used by Medicaid beneficiaries. Retail pharmacies distribute drugs to Medicaid beneficiaries and then receive reimbursements from states for the acquisition cost of the drug and a dispensing fee. Medicaid outpatient drug spending has decreased since 2006 because Medicare Part D replaced Medicaid as the primary source of drug coverage for low-income beneficiaries with coverage under both programs—referred to as dual eligible beneficiaries. In fiscal year 2008, Medicaid outpatient drug spending was $9.3 billion—including $5.5 billion as the federal share—which was calculated after adjusting for manufacturer rebates to states under the Medicaid drug rebate program. FEHBP uses competition among health plans as the primary measure to control prescription drug spending and other program costs. Under an annual “open season,” enrollees may remain enrolled in the same plan or select another competing plan based on benefits, services, premiums, and other such factors. Thus, plans have the incentive to try to retain or increase their market share by providing the benefits sought by enrollees along with competitive premiums. In turn, the larger a plan’s market share, the more leverage it has for obtaining favorable drug prices on behalf of its enrollees and controlling prescription drug spending. Similar to most private employer-sponsored or individually purchased health plans, most FEHBP plans contract with pharmacy benefit managers (PBMs) to help them administer the prescription drug benefit and control drug spending. In a 2003 report reviewing the use of PBMs by three plans representing about 55 percent of total FEHBP enrollment, we found that the PBMs used three key approaches to achieve savings for the health plans: negotiating rebates with drug manufacturers and passing some of the savings to the plans; obtaining drug price discounts from retail pharmacies and dispensing drugs at lower costs through mail-order pharmacies operated by the PBMs; and using other intervention techniques that reduce utilization of certain drugs or substitute other, less costly drugs. 
For example, under generic substitution PBMs substituted less expensive, chemically equivalent generic drugs for brand-name drugs; under therapeutic interchange PBMs encouraged the substitution of less expensive formulary brand-name drugs for more expensive nonformulary drugs within the same drug class; under prior authorization PBMs required enrollees to receive approval from the plan or PBM before certain drugs that are high cost or meet other criteria could be dispensed; and under drug utilization review PBMs examined prescriptions at the time of purchase or retrospectively to assess safety considerations and compliance with clinical guidelines, including appropriate quantity and dosage. The PBMs were compensated by retaining some of the negotiated savings. The PBMs also collected fees from the plans for administrative and clinical services, kept a portion of the payments from FEHBP plans for mail-order drugs in excess of the prices they paid manufacturers to acquire the drugs, and in some cases retained a share of the rebates that they negotiated with drug manufacturers. While OPM does not play a role in negotiating prescription drug prices or discounts, it does attempt to limit prescription drug spending through its leverage with participating health plans in annual premium and benefit negotiations. Each year, OPM negotiates benefit and rate proposals with participating plans and announces key policy goals for the program, including those relating to spending control. For example, in preparation for benefit and rate negotiations for the 2007 plan year, OPM encouraged proposals from plans to continue to explore the appropriate substitution of lower cost therapeutic alternatives, such as generic drugs, for higher cost drugs, and the use of tiered formularies or prescription drug lists. OPM also sought proposals from plans to pursue the advantages of specialty pharmacy programs aimed at reducing the high costs of infused and intravenously administered drugs. In preparation for 2010 benefit and rate negotiations, OPM reiterated its desire for proposals from plans to substitute lower cost for higher cost therapeutically equivalent drugs, with added emphasis on using evidence-based health outcome measures. Medicare Part D uses a competitive model similar to FEHBP, while other federal programs use other methods, such as statutorily mandated prices or direct negotiations with drug suppliers. Medicare Part D follows a model similar to the FEHBP by relying on competing prescription drug plans to control prescription drug spending. As with the FEHBP, during an annual open season Part D enrollees may remain enrolled in the same plan or select from among other competing plans based on benefit design, premiums, and other plan features. To attract enrollees, plans have the incentive to offer benefits that will meet beneficiaries’ prescription drug needs at competitive premiums. The larger a plan’s market share, the more leverage it has for obtaining favorable drug prices on behalf of its enrollees and controlling prescription drug spending. As a result, Part D plans vary in their monthly premiums, annual deductibles, and cost sharing for drugs. Plans also differ in the drugs they cover on their formulary and the pharmacies they use. Part D uses competing sponsors to generate prescription drug savings for beneficiaries, in part through their ability to negotiate prices with drug manufacturers and pharmacies. 
To generate these savings, sponsors often contract with PBMs to negotiate rebates with drug manufacturers, discounts with retail pharmacies, and other price concessions on behalf of the sponsor. MMA specifically states that the Secretary of HHS may not interfere with negotiations between sponsors and drug manufacturers and pharmacies. Even though CMS is not involved in price negotiations, it attempts to determine whether beneficiaries are receiving the benefit of negotiated drug prices and price concessions when it calculates the final plan payments. Sponsors must report the price concession amounts to CMS and pass price concessions onto beneficiaries and the program through lower cost sharing, lower drug prices, or lower premiums. Similar to OPM, CMS also negotiates plan design with participating plans and announces key policy goals for the program, including those relating to spending control. For example, in preparation for 2010 benefit and rate negotiations, CMS noted that one of its goals is to establish a more transparent process so that beneficiaries will be able to better predict their out-of-pocket costs. Part D sponsors or their PBMs also use other methods to help contain drug spending similar to FEHBP plans. For example, most plans assign covered drugs to distinct tiers, each of which carries a different level of cost sharing. A plan may establish separate tiers for generic drugs and brand-name drugs—with the generic drug tier requiring a lower level of cost sharing than the brand-name drug tier. Plans may also require utilization management for certain drugs on their formulary. Common utilization management practices include requiring physicians to obtain authorization from the plan prior to prescribing a drug; step therapy, which requires beneficiaries to first try a less costly drug to treat their condition; and imposing quantity limits for dispensed drugs. Additionally, all Part D plans must meet requirements with respect to the extent of their pharmacy networks and the categories of drugs they must cover. Plan formularies generally must cover at least two Part D drugs in each therapeutic category and class, except when there is only one drug in the category or class or when CMS has allowed the plan to cover only one drug. CMS has also designated six categories of drugs of clinical concern for which plans must cover all or substantially all of the drugs. While FEHBP and Medicare Part D use competition between health plans to control prescription drug spending, VA and DOD rely on statutorily mandated prices and discounts and further negotiations with drug suppliers to obtain lower prices for drugs covered on their formularies. VA and DOD have access to a number of prices to consider when purchasing drugs, paying the lowest available. Federal Supply Schedule (FSS) prices. VA’s National Acquisition Center negotiates FSS prices with drug manufacturers, and these prices are available to all direct federal purchasers. FSS prices are intended to be no more than the prices manufacturers charge their most-favored nonfederal customers under comparable terms and conditions. Under federal law, drug manufacturers must list their brand-name drugs on the FSS to receive reimbursement for drugs covered by Medicaid. All FSS prices include a fee of 0.5 percent of the price to fund VA’s National Acquisition Center. Blanket purchase agreements and other national contracts. 
Blanket purchase agreements and other national contracts with drug manufacturers allow VA and DOD—either separately or jointly—to negotiate prices below FSS prices. The lower prices may depend on the volume of specific drugs being purchased by particular facilities, such as VA or military hospitals, or on being assigned preferred status on VA’s and DOD’s respective national formularies. In a few cases, individual VA and DOD medical centers have obtained lower prices through local agreements with suppliers than they could through the national contracts, FSS prices, or federal ceiling prices. In addition, VA’s and DOD’s use of formularies, pharmacies, and prime vendors can further affect drug prices and help control drug spending. Both VA and DOD use their own national, standard formulary to obtain more competitive prices from manufacturers that have their drugs listed on the formulary. VA and DOD formularies also encourage the substitution of lower cost drugs determined to be as or more effective than higher cost drugs. VA and DOD use prime vendors, which are preferred drug distributors, to purchase drugs from manufacturers and deliver the drugs to VA or DOD facilities. VA and DOD receive discounts from their prime vendors that also reduce the prices that they pay for drugs. For DOD, the discounts vary among prime vendors and the areas they serve. As of June 2004, VA’s prime vendor discount was 5 percent, while DOD’s discounts averaged about 2.9 percent within the United States. Additionally, similar to FEHBP and Medicare Part D, DOD uses utilization management methods to limit drug spending, including prior authorization, dispensing limitations, and higher cost sharing for nonformulary drugs and drugs dispensed at retail pharmacies. Unlike VA and DOD, Medicaid programs do not negotiate drug prices with manufacturers to control prescription drug spending, but reimburse retail pharmacies for drugs dispensed to beneficiaries at set prices. CMS sets aggregate payment limits—known as the federal upper limit (FUL)—for certain outpatient multiple-source prescription drugs. CMS also provides guidelines regarding drug payment. States are to pay pharmacies the lower of the state’s estimate of the drug’s acquisition cost to the pharmacy, plus a dispensing fee, or the pharmacy’s usual and customary charge to the general public; for certain drugs the FUL or the state maximum allowable costs may apply if lower. In addition to these retail pharmacy reimbursements, Medicaid programs also control prescription drug spending through the Medicaid drug rebate program. Under the drug rebate program, drug manufacturers are required to provide quarterly rebates for covered outpatient prescription drugs purchased by state Medicaid programs. Under the rebate program, states take advantage of the prices manufacturers receive for drugs in the commercial market that reflect the results of negotiations by private payers such as discounts and rebates. For brand-name drugs, the rebates are based on two price benchmarks per drug that manufacturers report to CMS: best price and average manufacturer price (AMP). The relationship between best price and AMP determines the unit rebate amount and thus the overall size of the rebate that states receive. The basic unit rebate amount is the greater of two values: the difference between best price and AMP or 15.1 percent of AMP. 
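The brand-name basic unit rebate just described can be expressed as a minimal sketch; the dollar figures are hypothetical illustrations rather than actual reported prices, and the function name is ours, not CMS terminology.

```python
def basic_unit_rebate(amp: float, best_price: float) -> float:
    """Brand-name basic unit rebate under the Medicaid drug rebate program,
    as described in the text: the greater of the difference between best
    price and AMP, or 15.1 percent of AMP."""
    return max(amp - best_price, 0.151 * amp)

# Hypothetical example: with an AMP of $100.00 and a best price of $80.00,
# the $20.00 price difference exceeds 15.1 percent of AMP ($15.10),
# so the basic unit rebate is $20.00 per unit of the drug.
print(basic_unit_rebate(amp=100.00, best_price=80.00))  # 20.0
```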
If the brand-name drug’s AMP rises faster than inflation as measured by the change in the consumer price index, the manufacturer is required to provide an additional rebate to the state Medicaid program. In addition to brand-name drugs, states also receive rebates for generic drugs. For generic drugs, the basic unit rebate amount is 11 percent of the AMP. A state’s rebate for a drug is the product of the unit rebate amount plus any applicable additional rebate amount and the number of units of the drug paid for by the state’s Medicaid program. In addition to the rebates mandated under the drug rebate program, states can also negotiate additional rebates with manufacturers. Like FEHBP and Medicare Part D participating plans, Medicaid programs also use other utilization management methods to control prescription drug spending including prior authorization and utilization review programs, dispensing limitations, and cost-sharing requirements. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the Subcommittee may have. For future contacts regarding this testimony, please contact John E. Dicken at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Randy DiRosa, Assistant Director; Rashmi Agarwal; William A. Crafton; Martha Kelly; and Timothy Walker made key contributions to this statement. Federal Employees Health Benefits Program: Enrollee Cost Sharing for Selected Specialty Prescription Drugs. GAO-09-517R. Washington, D.C.: April 30, 2009. Medicare Part D Prescription Drug Coverage: Federal Oversight of Reported Price Concessions Data. GAO-08-1074R. Washington, D.C.: September 30, 2008. DOD Pharmacy Program: Continued Efforts Needed to Reduce Growth in Spending at Retail Pharmacies. GAO-08-327. Washington, D.C.: April 4, 2008. DOD Pharmacy Benefits Program: Reduced Pharmacy Costs Resulting from the Uniform Formulary and Manufacturer Rebates. GAO-08-172R. Washington, D.C.: October 31, 2007. Military Health Care: TRICARE Cost-Sharing Proposals Would Help Offset Increasing Health Care Spending, but Projected Savings Are Likely Overestimated. GAO-07-647. Washington, D.C.: May 31, 2007. Federal Employees Health Benefits Program: Premiums Continue to Rise, but Rate of Growth Has Recently Slowed. GAO-07-873T. Washington, D.C.: May 18, 2007. Prescription Drugs: Oversight of Drug Pricing in Federal Programs. GAO-07-481T. Washington, D.C.: February 9, 2007. Prescription Drugs: An Overview of Approaches to Negotiate Drug Prices Used by Other Countries and U.S. Private Payers and Federal Programs. GAO-07-358T. Washington, D.C.: January 11, 2007. Medicaid Outpatient Prescription Drugs: Estimated 2007 Federal Upper Limits for Reimbursement Compared with Retail Pharmacy Acquisition Costs. GAO-07-239R. Washington, D.C.: December 22, 2006. Federal Employees Health Benefits Program: Premium Growth Has Recently Slowed, and Varies among Participating Plans. GAO-07-141. Washington, D.C.: December 22, 2006. Medicaid: States’ Payments for Outpatient Prescription Drugs. GAO-06-69R. Washington, D.C.: October 31, 2005.
Millions of individuals receive prescription drugs through federal programs. The increasing cost of prescription drugs has put pressure on federal programs, such as the Federal Employees Health Benefits Program (FEHBP), Medicare Part D, the Department of Veterans Affairs (VA), the Department of Defense (DOD), and Medicaid, to control drug spending. Within the FEHBP in particular, which provides health and drug coverage to about 8 million federal employees, retirees, and their dependents, prescription drug spending has been a significant contributor to cost and premium growth. The Office of Personnel Management (OPM), which administers the FEHBP, predicted that prescription drugs would continue to be a primary driver of program costs in 2009. GAO was asked to describe approaches used by the FEHBP to control prescription drug spending and summarize approaches used by other federal programs. This testimony is based on prior GAO work, including Prescription Drugs: Oversight of Drug Pricing in Federal Programs (GAO-07-481T) and Prescription Drugs: An Overview of Approaches to Negotiate Drug Prices Used by Other Countries and U.S. Private Payers and Federal Programs (GAO-07-358T), as well as selected updates from relevant literature on drug spending controls prepared by other congressional and federal agencies. FEHBP uses competition among health plans to control prescription drug spending, giving plans an incentive to rein in costs and leverage their market share to obtain favorable drug prices. Most FEHBP plans contract with pharmacy benefit managers (PBMs) to help administer the prescription drug benefit. In a 2003 report, GAO found that the PBMs reduced drug spending by: negotiating rebates with drug manufacturers and passing some of the savings to the plans; obtaining drug price discounts from retail pharmacies and dispensing drugs at lower costs through mail-order pharmacies operated by the PBMs; and using other techniques that reduce utilization of certain drugs or substitute other, less costly drugs. While OPM does not negotiate drug prices or discounts for FEHBP, it attempts to limit spending through annual premium and benefit negotiations with plans, including the encouragement of spending controls such as generic substitution. Other federal programs use a range of approaches to control prescription drug spending. (1) Medicare--the federal health insurance program for the elderly and disabled--offers an outpatient prescription drug benefit known as Medicare Part D that uses competition between plan sponsors and their PBMs to limit drug spending, in part through the ability to negotiate prices and price concessions with drug manufacturers and pharmacies. Plans are required to report these negotiated price concessions to the Centers for Medicare & Medicaid Services (CMS), to help CMS determine the extent to which they are passed on to beneficiaries. (2) VA and DOD pharmacy benefit programs for veterans, active duty military personnel, and others may use statutorily mandated discounts as well as negotiations with drug suppliers to limit drug spending. VA and DOD have access to a number of prices to consider when purchasing drugs--including the Federal Supply Schedule prices that VA negotiates with drug manufacturers--paying the lowest of all available prices. (3) The Medicaid program for low-income adults and children is subject to aggregate payment limits and drug payment guidelines set by CMS. 
Medicaid does not negotiate drug prices with manufacturers, but reimburses retail pharmacies for drugs dispensed to beneficiaries at set prices. An important element of controlling Medicaid drug spending is the Medicaid drug rebate program, under which drug manufacturers are required by law to provide rebates for certain drugs covered by Medicaid. Under the rebate program, states take advantage of prices manufacturers receive for drugs in the commercial market that reflect discounts and rebates negotiated by private payers. In addition, Part D, VA and DOD, and Medicaid use techniques similar to FEHBP to limit drug spending, such as generic substitution, prior authorization, utilization review programs, or cost-sharing requirements.
8(a) ANC contracting represents a small amount of total federal procurement spending. However, dollars obligated to ANC firms through the 8(a) program grew from $265 million in fiscal year 2000 to $1.1 billion in 2004. Overall, during the 5-year period, the government obligated $4.6 billion to ANC firms, of which $2.9 billion, or 63 percent, went through the 8(a) program. During this period, six federal agencies—the departments of Defense, Energy, the Interior, State, and Transportation and NASA—accounted for almost 85 percent of total 8(a) ANC obligations. Obligations for 8(a) sole-source contracts by these agencies to ANC firms increased from about $180 million in fiscal year 2000 to about $876 million in fiscal year 2004. ANCs use the 8(a) program as one of many tools to generate revenue with the goal of benefiting their shareholders. Some ANCs are heavily reliant on the 8(a) program for revenues, while others approach the program as one of many revenue-generating opportunities, such as investments in stocks or real estate. ANCs are using the congressionally authorized advantages afforded to them, such as ownership of multiple 8(a) subsidiaries, sometimes in diversified lines of business. From fiscal year 1988 to 2005, the number of ANC-owned 8(a) subsidiaries increased from one subsidiary owned by one ANC to 154 subsidiaries owned by 49 ANCs. Figure 1 shows the recent growth in ANCs’ 8(a) subsidiaries. ANCs use their ability to own multiple businesses in the 8(a) program, as allowed by law, in different ways. For example, some ANCs create a second subsidiary in anticipation of winning follow-on work from one of their graduating subsidiaries; some wholly own their 8(a) subsidiaries, while others invest in partially owned firms; and some diversify their subsidiaries’ capabilities to increase opportunities to win government contracts in various industries. Our review of 16 large sole-source contracts awarded by 7 agencies found that agency officials view contracting with 8(a) ANC firms as a quick, easy, and legal way to award contracts while at the same time helping their agencies meet small business goals. Memoranda of Understanding (partnership agreements) between SBA and agencies delegate the contract execution function to federal agencies, although SBA remains responsible for implementing the 8(a) program. We found that contracting officials had not always complied with requirements to notify SBA when modifying contracts, such as increasing the scope of work or the dollar value, and to monitor the percentage of the work performed by the 8(a) firms versus their subcontractors. For example: Federal regulation requires that when 8(a) firms subcontract under an 8(a) service contract, they incur at least 50 percent of the personnel costs with their own employees. The purpose of this provision, which limits the amount of work that can be performed by the subcontractor, is to ensure that small businesses do not pass along the benefits of their contracts to their subcontractors. For the 16 files we reviewed, we found almost no evidence that the agencies are effectively monitoring compliance with this requirement. In general, the contracting officers we spoke with were confused about whose responsibility such monitoring is. Agencies are also required to notify SBA of all 8(a) contract awards, modifications, and exercised options where the contract execution function has been delegated to the agencies in the partnership agreements. We found that not all contracting officers were doing so. 
In one case, the Department of Energy contracting officer had broadened the scope of a contract a year after award, adding 10 additional lines of business that almost tripled the value of the contract. These changes were not coordinated with SBA. We reported in 2006 that SBA had not tailored its policies and practices to account for ANCs’ unique status and growth in the 8(a) program, even though officials recognize that ANC firms enter into more complex business relationships than other 8(a) participants. SBA officials told us that they have faced a challenge in overseeing the activity of the 8(a) ANC firms because ANCs’ charter under the Alaska Native Claims Settlement Act is not always consistent with the business development intent of the 8(a) program. The officials noted that the goal of ANCs—economic development for Alaska Natives from a community standpoint—can be in conflict with the primary purpose of the 8(a) program, which is business development for individual small, disadvantaged businesses. SBA’s oversight fell short in that it did not: track the primary business industries in which ANC subsidiaries had 8(a) contracts to ensure that more than one subsidiary of the same ANC was not generating the majority of its revenue under the same primary industry code; consistently determine whether other small businesses were losing contracting opportunities when large sole-source contracts were awarded to 8(a) ANC firms; adhere to a statutory and regulatory requirement to ascertain whether 8(a) ANC firms, when entering the 8(a) program or for each contract award, had, or were likely to obtain, a substantial unfair competitive advantage within an industry; ensure that partnerships between 8(a) ANC firms and large firms were functioning in the way they were intended under the 8(a) program; and maintain information on ANC 8(a) activity. SBA officials from the Alaska district office had reported to headquarters that the makeup of their 8(a) portfolio was challenging and required more contracting knowledge and business savvy than usual because the majority of the firms they oversee are owned by ANCs and tribal entities. The officials commented that these firms tend to pursue complex business relationships and tend to be awarded large and often complex contracts. We found that the district office officials were having difficulty managing their large volume and the unique type of work in their 8(a) portfolio. When we began our review, SBA headquarters officials responsible for overseeing the 8(a) program did not seem aware of the growth in the ANC 8(a) portfolio and had not taken steps to address the increased volume of work in their Alaska office. In 2006, we reported that ANCs were increasingly using the contracting advantages Congress has provided them. Our work showed that procuring agencies’ contracting officers are in need of guidance on how to use these contracts while exercising diligence to ensure that taxpayer dollars are spent effectively. Equally important, we stated, significant improvements were needed in SBA’s oversight of the program. Without stronger oversight, we noted the potential for abuse and unintended consequences. In our April 2006 report, we made 10 recommendations to SBA on actions that can be taken to revise its regulations and policies and to improve practices pertaining to its oversight of ANC 8(a) procurements. Our recommendations and SBA’s June 2007 response are as follows. We recommended that the Administrator of SBA: 1. 
Ascertain and then clearly articulate in regulation how SBA will comply with existing law to determine whether and when one or more ANC firms are obtaining, or are likely to obtain, a substantial unfair competitive advantage in an industry. SBA response: SBA is exploring possible regulatory changes that would address the issue of better controlling the award of sole-source 8(a) contracts over the competitive threshold dollar limitation to joint ventures between tribally and ANC-owned 8(a) firms and other business concerns. 2. In regulation, specifically address SBA’s role in monitoring ownership of ANC holding companies that manage 8(a) operations to ensure that the companies are wholly owned by the ANC and that any changes in ownership are reported to SBA. SBA response: SBA is building a Business Development Management Information System to electronically manage all aspects of the 8(a) program. According to SBA, this system, scheduled to be completed in fiscal year 2008, will monitor program participants’ continuing eligibility in the 8(a) program and could include an ANC element in the electronic annual review that would monitor the ownership of ANC holding companies that manage 8(a) operations and ensure that any changes in ownership are reported to SBA. 3. Collect information on ANCs’ 8(a) participation as part of required overall 8(a) monitoring, to include tracking the primary revenue generators for 8(a) ANC firms to ensure that multiple subsidiaries under one ANC are not generating their revenue in the same primary industry. SBA response: The planned electronic annual review can collect information on ANCs’ multiple subsidiaries to ensure that they are not generating the majority of their revenues from the same primary industry. Further, to ensure that an ANC-owned firm does not enter the 8(a) program with the same North American Industry Classification System (NAICS) code as another current or former 8(a) firm owned by that ANC, the ANC-owned applicant must certify that it operates in a distinct primary industry and must demonstrate that fact through revenues generated. SBA notes that the planned annual electronic reviews can validate this information. 4. Revisit regulation that requires agencies to notify SBA of all contract modifications and consider establishing thresholds for notification, such as when new NAICS codes are added to the contract or there is a certain percentage increase in the dollar value of the contract. Once notification criteria are determined, provide guidance to the agencies on when to notify SBA of contract modifications and scope changes. SBA response: SBA stated that its revisions to its partnership agreements with federal agencies address this recommendation. However, we note that the revised agreement does not establish thresholds or include new criteria for when agencies should send SBA contract modifications or award documentation. The agreement states that agencies “shall provide a copy of any contract…including basic contracts, orders, modifications, and purchase orders” to SBA. 5. Consistently determine whether other small businesses are losing contracting opportunities when awarding contracts through the 8(a) program to ANC firms. SBA response: SBA stated that it plans to require the contracting agencies to include impact statements in their contract offer letters to SBA. 6. Standardize approval letters for each 8(a) procurement to clearly assign accountability for monitoring of subcontracting and for notifying SBA of contract modifications. 
SBA response: SBA agreed with the recommendation but did not indicate an action taken or planned. 7. Tailor wording in approval letters to explain the basis for adverse impact determinations. SBA response: SBA agreed with the recommendation but did not indicate an action taken or planned. 8. Clarify memorandums of understanding (known as partnership agreements) with procuring agencies to state that it is the agency contracting officer’s responsibility to monitor compliance with the limitation on subcontracting clause. SBA response: SBA has implemented this recommendation by revising the partnership agreements with the procuring agencies. It added several provisions that delineate the agencies’ responsibilities for oversight, monitoring, and compliance with procurement laws and regulations governing 8(a) contracts, including the limitation on subcontracting clause. 9. Evaluate staffing levels and training needed to effectively oversee ANC participation in the 8(a) program and take steps to allocate appropriate resources to the Alaska district office. SBA response: SBA stated that the planned Business Development Management Information System should help the Alaska district office more effectively oversee ANC participation in the 8(a) program. It stated that it is providing training to the Alaska district office. However, no plans were in place to evaluate staffing levels at the office. 10. Provide more training to agencies on the 8(a) program, specifically including a component on ANC 8(a) participation. SBA response: SBA has provided training to agencies on the revised 8(a) partnership agreements; however, our review of the slides SBA used for the training found no reference to ANC 8(a) firms specifically. According to an SBA official, SBA will include a component on ANC 8(a) participants in future training sessions. We also recommended that procuring agencies provide guidance to contracting officers to ensure proper oversight of ANC contracts. The procuring agencies generally agreed with the recommendation. Some agencies are waiting for SBA to implement our recommendations before they take their own actions, but others have taken steps to tighten their oversight of contracts with 8(a) ANC firms. The Department of Homeland Security, for example, recently issued an “acquisition alert” requiring that its heads of contracting activities provide guidance and training on the use of 8(a) firms owned by ANCs. The alert provides that use of the authority to award sole-source 8(a) contracts to ANCs must be judicious with appropriate safeguards to ensure that the cost/price is fair and reasonable, that the ANC has the technical ability to perform the work, that the ANC will be performing the required percentage of the work and that the award is in the best interests of the government. The Department of Energy revised its acquisition guidance regarding small business programs to remind contracting officers to use care in awarding and administering ANC contracts, to include notifying SBA of contract modifications and monitoring the limits on subcontracting. The Department also provided training on the 8(a) program, to include contracting with ANC firms. By providing contracting officers with appropriate training on these issues, the government is taking steps to ensure that the ANC firms are operating in the program as intended, thereby mitigating the risk of unintended consequences or abuse of some of the privileges provided to these firms. This concludes my testimony. 
I would be happy to answer any questions you may have. For further information regarding this testimony, please contact Katherine V. Schinasi at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors were Michele Mackin, Sylvia Schatz, and Tatiana Winger. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Alaska Native corporations (ANC) were created to settle land claims with Alaska Natives and foster economic development. In 1986, legislation passed that allowed ANCs to participate in the Small Business Administration's (SBA) 8(a) program. Since then, Congress has extended special procurement advantages to 8(a) ANC firms, such as the ability to receive sole-source contracts for any dollar amount and to own multiple subsidiaries in the 8(a) program. We were asked to testify on an earlier report where we identified (1) trends in the government's 8(a) contracting with ANC firms, (2) the reasons agencies have awarded 8(a) sole-source contracts to ANC firms and the facts and circumstances behind some of these contracts, and (3) how ANCs are using the 8(a) program. GAO also evaluated SBA's oversight of 8(a) ANC firms. GAO made recommendations aimed at improving SBA's oversight of 8(a) ANC contracting activity and ensuring that procuring agencies properly oversee 8(a) contracts they award to ANC firms. SBA has either taken action or plans to take action on the recommendations. The procuring agencies generally agreed with our recommendation to them. We believe implementation of our recommendations will provide better oversight of 8(a) ANC contracting activity and provide decision makers with information to know whether the program is operating as intended. While representing a small amount of total federal procurement spending, obligations for 8(a) contracts to ANC firms increased from $265 million in fiscal year 2000 to $1.1 billion in 2004. Over the 5-year period, agencies obligated $4.6 billion to ANC firms, of which $2.9 billion, or 63 percent, went through the 8(a) program. During this period, six federal agencies--the departments of Defense, Energy, the Interior, State, and Transportation and the National Aeronautics and Space Administration--accounted for over 85 percent of 8(a) contracting activity. Obligations for 8(a) sole source contracts by these agencies to ANC firms increased from about $180 million in fiscal year 2000 to about $876 million in fiscal year 2004. ANCs use the 8(a) program as one of many tools to generate revenue with the goal of providing benefits to their shareholders. Some ANCs are heavily reliant on the 8(a) program for revenues, while others approach the program as one of many revenue-generating opportunities. GAO found that some ANCs have increasingly made use of the congressionally authorized advantages afforded to them. One of the key practices is the creation of multiple 8(a) subsidiaries, sometimes in highly diversified lines of business. From fiscal year 1988 to 2005, ANC 8(a) subsidiaries increased from one subsidiary owned by one ANC to 154 subsidiaries owned by 49 ANCs. In general, acquisition officials at the agencies reviewed told GAO that the option of using ANC firms under the 8(a) program allows them to quickly, easily, and legally award contracts for any value. They also noted that these contracts help them meet small business goals. In reviewing selected large sole-source 8(a) contracts awarded to ANC firms, GAO found that contracting officials had not always complied with certain requirements, such as notifying SBA of contract modifications and monitoring the percentage of work that is subcontracted. 
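The shares and growth rates in this summary follow directly from the obligation figures cited above. The minimal sketch below simply reproduces that arithmetic for readers who want to trace it; the variable names are ours and are illustrative only.

```python
# Illustrative arithmetic only: reproduces the percentages and growth factors
# implied by the obligation figures cited in this summary (dollars in billions).

obligations_8a_fy2000 = 0.265   # 8(a) obligations to ANC firms, fiscal year 2000
obligations_8a_fy2004 = 1.1     # 8(a) obligations to ANC firms, fiscal year 2004
total_anc_obligations = 4.6     # all obligations to ANC firms over the 5-year period
anc_8a_obligations = 2.9        # portion of those obligations awarded through the 8(a) program

share_through_8a = anc_8a_obligations / total_anc_obligations
growth_factor = obligations_8a_fy2004 / obligations_8a_fy2000

print(f"Share of ANC obligations through 8(a): {share_through_8a:.0%}")    # about 63 percent
print(f"Growth in 8(a) obligations, FY2000-FY2004: {growth_factor:.1f}x")  # roughly a fourfold increase
```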
SBA, which is primarily responsible for implementing the 8(a) program, had not tailored its policies and practices to account for ANCs' unique status and growth in the 8(a) program, even though SBA officials recognized that ANCs enter into more complex business relationships than other 8(a) participants. Areas where SBA's oversight fell short included determining whether more than one subsidiary of the same ANC was generating a majority of its revenue in the same primary industry, consistently determining whether awards to 8(a) ANC firms had resulted in other small businesses losing contract opportunities, and ensuring that the partnerships between 8(a) ANC firms and large firms were functioning in the way they were intended.
The senior U.S. military authority in the Pacific Area of Responsibility is the Commander of U.S. Pacific Command. Pacific Command is one of six U.S. geographic combatant commands. Pacific Command’s area of responsibility spans roughly half the earth’s surface and encompasses 36 countries, including Australia, China, India, Japan, the Philippines, and South Korea. The Pacific Command is supported by four service component commands: U.S. Army Pacific; U.S. Pacific Air Forces; U.S. Marine Forces, Pacific; and the U.S. Pacific Fleet. Each component command is generally responsible for its service’s actions and missions within the Pacific Command area of responsibility and is supported by subordinate commands, which help support the service’s presence in the region. For example, U.S. Marine Forces, Pacific, is supported by the III Marine Expeditionary Force, a large Marine Corps unit forward deployed to Japan and other parts of Asia, which stands ready to conduct operations. Also supporting Marines in the Pacific is U.S. Marine Corps Installations, Pacific, which is responsible for the command and control of all Marine Corps installations in the region. In Japan, U.S. Forces– Japan—a subunified command under the Pacific Command—supports U.S. forward presence and ensures bilateral defense cooperation with the government of Japan. According to U.S. Forces–Japan, it focuses on war planning, the conduct of joint/bilateral exercises and studies, administering the Status of Forces Agreement, improving combat readiness, and enhancing the quality of life of military and DOD civilian personnel and their dependents. See figure 1 for information on major U.S. forces and installations in Japan, Okinawa, and Guam; approximate distances between Guam and other strategic locations in the Pacific; and U.S. strategic allies in the Pacific. The U.S.-Japan alliance dates back to the U.S. occupation of Japan after its defeat in World War II. The alliance is supported by the 1960 Treaty of Mutual Cooperation and Security and a related Status of Forces Agreement which today covers about 51,200 U.S. servicemembers, 5,400 DOD civilian employees, and 42,200 dependents in Japan, as of January 2013. As a result of the treaty, the Status of Forces Agreement, and related agreements, U.S. forces have the use of nearly 90 installations throughout both mainland Japan and Okinawa, for the purpose of contributing to the security of Japan and the maintenance of international peace and security in the region. Under the treaty and Status of Forces Agreement, the United States is granted the use of facilities and areas in Japan, with specific facilities and areas to be determined by the two governments. Generally, according to U.S. Forces–Japan officials, Japan constructs the facilities, while the United States bears the costs of maintenance—with each facility typically having a 50-year service life. One issue that remains at the forefront of the alliance is the realignment of U.S. forces in Japan. Efforts to realign U.S. forces in Japan date back to 1995. Discontent among the people of Okinawa regarding the U.S. military presence led to the establishment of the Special Action Committee on Okinawa in November 1995 by the Security Consultative Committee, a bilateral group of high-ranking U.S. and Japanese officials involved with overall bilateral policy regarding the security relationship between the two countries. 
In December 1996, this committee approved the final report of the Special Action Committee on Okinawa, which included recommendations on how to consolidate, realign, and reduce U.S. facilities and areas and adjust the operational procedures of U.S. forces in Okinawa in order to reduce the burden on local communities. Realignment efforts did not gain much traction until the end of 2002, when the United States and Japan launched an ambitious series of realignment initiatives called The Defense Policy Review Initiative (DPRI). Under DPRI, both countries were seeking to reduce the U.S. footprint in Okinawa, enhance interoperability and communication, and better position U.S. forces to respond to a changing security environment. The major realignment initiatives under DPRI are outlined in the United States—Japan Roadmap for Realignment Implementation (2006 Roadmap) which was issued in May 2006 by the Security Consultative Committee, reaffirmed and implemented in part in a 2009 bilateral agreement, and recently adjusted in the April 2012 statement. There are four initiatives under DPRI that are specific to the Marine Corps and its current plans to realign its forces in the Pacific: 1. Futenma Replacement Facility, 2. Realignment of Marine Corps units, 3. Okinawa Consolidation, and 4. Carrier Air Wing Move from Atsugi to Iwakuni. As envisioned by the 2006 Roadmap, the U.S. government would return to Japan the Marine Corps Air Station Futenma in Okinawa once the government of Japan constructed a fully operational replacement facility (Futenma Replacement Facility), including a runway, in a northern, less populated area of the island. This facility was originally projected to be complete by 2014. According to Marine Corps officials, some facilities have been constructed at the planned site of the realignment—Camp Schwab; however, the construction of the replacement runway has stalled. Those same officials stated that before construction of the runway can proceed, the government of Japan has to issue an environmental impact statement for the construction of the runway, and the Okinawa government has to approve a landfill permit. According to DOD officials, in December 2012, Japan’s Ministry of Defense submitted the environmental impact statement to the Governor of Okinawa. Subsequently, in March 2013, DOD officials informed us that the government of Japan submitted the application for the landfill permit to the Governor of Okinawa. Figure 2 shows the planned location of the runway at Camp Schwab. The Marine Corps estimates that its Operation and Maintenance and Procurement costs for the Futenma Replacement Facility will be approximately $178 million over the next 5 years; however, this estimate does not constitute the total cost to the United States and, according to Marine Corps officials, has not been approved. Because of the uncertainties surrounding the construction of the runway at Camp Schwab and following direction from the Senate Armed Services Committee, DOD has examined the feasibility of relocating air assets from Marine Corps Air Station Futenma to Kadena Air Base, as an alternative to constructing the Futenma Replacement Facility at Camp Schwab. However, DOD concluded that it was not a viable solution. In the April 2012 statement, the representatives from the United States and Japan reconfirmed their view that the Futenma Replacement Facility remains the only viable solution that has been identified to date. 
In addition, the April 2012 statement noted that both governments expressed their commitment to contribute to refurbishment projects at Marine Corps Air Station Futenma to sustain safe mission capability until the Futenma Replacement Facility is fully operational and to protect the environment. According to Marine Corps officials, as of February 2013, a list of refurbishment projects to be funded by the U.S. government and the government of Japan has been identified, and planning for these projects is expected to be completed by April 2013. Though time frames may vary, Marine Corps officials expect work on these projects could start sometime in 2014. After several years of planning to move approximately 8,000 Marines off Okinawa to Guam, DOD revised its plan in April 2012 to relocate some units from Okinawa to Guam, Hawaii, and the continental United States. Additionally, the plan includes establishing a rotational Marine Corps presence in Australia, a move that, according to DOD officials, stems from a November 2011 agreement between the United States and Australia. To date, the Marine Corps has established a small presence on Guam to prepare for the Marine realignment, but it has not yet relocated any units from Okinawa to Guam, nor has it been able to reduce its presence on Okinawa as anticipated under the April 2012 statement. According to Marine Corps officials, Marines cannot be relocated until suitable replacement facilities are constructed and made operationally capable on Guam and in other locations. The Marine Corps’ current plan is to build facilities on Guam and live-fire training ranges on Guam, Tinian, and Pagan—members of the Mariana Islands—to support the realignment of approximately 5,000 personnel (mostly rotational) and any dependents to Guam. Before any Marines can relocate to Guam, DOD must examine the environmental effects of its proposed actions, pursuant to the National Environmental Policy Act of 1969. To address this requirement in the past, DOD performed an environmental review of certain proposed actions under the original 2006 realignment plan and released the Guam and Commonwealth of the Northern Mariana Islands Military Relocation Final Environmental Impact Statement in July 2010. In September 2010, the Department of the Navy announced in the Record of Decision for the Guam and Commonwealth of the Northern Mariana Islands Military Relocation that it will proceed with the Marine Corps realignment, but deferred the selection of a specific site for a live-fire training range complex on Guam pending further study. In February 2012, the Department of the Navy gave notice that it intended to prepare a Supplemental Environmental Impact Statement to evaluate locations for a live-fire training range complex on Guam. In October 2012, as a result of the current realignment plan, the Department of the Navy gave notice that it was planning to expand the scope of the ongoing Supplemental Environmental Impact Statement evaluating locations for the live-fire training range complex, to determine the potential environmental consequences from construction and operation of a main cantonment area, including family housing, and associated infrastructure on Guam to support the recently revised realignment plan.
According to Marine Corps officials, the Supplemental Environmental Impact Statement is expected to be completed by 2014, and it is anticipated that a final decision on all matters being evaluated will be released by 2015. The Joint Guam Program Office, which was established by the Navy in August 2006, leads this effort. DOD, using costing data derived from previous cost estimates for Guam, estimates that the total cost to relocate Marines to Guam as part of the realignment plan will be $8.6 billion in fiscal year 2012 dollars. According to DOD officials, the government of Japan is expected to provide approximately $3.1 billion for this realignment. As of June 2012, the United States had received $833.90 million from the government of Japan for this initiative; however, provisions in the National Defense Authorization Acts for Fiscal Years 2012 and 2013 restricted the use of funds provided by the government of Japan to implement the realignment from Okinawa to Guam until DOD provided certain information to the congressional defense committees. Although the National Defense Authorization Act for Fiscal Year 2013 restricts the use of funds, it contains exceptions allowing DOD to use funds to complete additional environmental analysis for proposed actions on Guam or Hawaii, initiate planning and design of construction projects at Andersen Air Force Base and on Andersen South, and to carry out certain military construction projects as specified in the act. As part of the current realignment plan, DOD plans to move some Marine Corps units to Hawaii and the continental United States. As of March 2013, the Marine Corps has not moved any units from Okinawa to either Hawaii or the continental United States. Additionally, DOD plans to establish a rotational presence of up to a 2,500-person Marine Air-Ground Task Force in an undetermined location in Australia. As an initial step toward establishing a Marine Air-Ground Task Force in Australia, the Marine Corps rotated approximately 200 Marines from Fox Company, 2nd Battalion, 3rd Marine Division from their home station at Marine Corps Base Hawaii, Kaneohe Bay, to Darwin, Australia, for a 6-month rotation from April to September 2012. The April 2012 statement noted that the United States is committed to returning lands on Okinawa to Japan as designated Marine Corps forces are relocated and as facilities become available for units and other tenant activities relocating to other locations on Okinawa. Figure 3 depicts all U.S. installations on Okinawa and identifies which installations have been designated to be partially or fully returned to Japan according to the April 2012 statement. According to the statement, the two governments will jointly develop a consolidation plan, including sequencing of realignment steps, for facilities and locations remaining in Okinawa by the end of 2012. DOD officials said that they have not been able to estimate U.S. costs for the consolidation, because consolidation plans remain under development. Although DOD has developed a preliminary rough-order-of-magnitude cost estimate for its current plan to relocate Marines from Okinawa and realign them to Guam and other locations in the Pacific, it is not reliable, because it is missing costs and is based on limited data.
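As a point of reference for the Guam funding figures above, the minimal sketch below rolls up the cited amounts. The assumption that the implied U.S. share equals the total estimate minus the expected Japanese contribution, along with the variable names, is ours and is for illustration only.

```python
# Illustrative roll-up of the Guam funding figures cited above.
# Amounts are in billions of fiscal year 2012 dollars; the derived values
# rest on the simplifying assumptions described in the lead-in text.

total_guam_estimate = 8.6      # DOD preliminary estimate for the Guam segment
expected_japan_share = 3.1     # expected government of Japan contribution
received_from_japan = 0.8339   # $833.90 million received as of June 2012

implied_us_share = total_guam_estimate - expected_japan_share
japan_contribution_outstanding = expected_japan_share - received_from_japan

print(f"Implied U.S. share:                  ${implied_us_share:.1f} billion")               # about $5.5 billion
print(f"Japanese funds received to date:     ${received_from_japan:.2f} billion")
print(f"Expected Japanese funds outstanding: ${japan_contribution_outstanding:.2f} billion")  # about $2.27 billion
```

Under these assumptions, roughly $5.5 billion would fall to the United States and about $2.3 billion in Japanese contributions would remain to be received, subject to the statutory restrictions noted above.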
According to DOD officials, DOD has not yet been able to put together a more reliable cost estimate for its current plan, because it will not have the information needed to do so until the completion of several key studies, including environmental analyses for Guam and Hawaii. In addition, host nation negotiations—specifically those with the government of Australia—have not been completed. As part of its preliminary rough-order-of-magnitude cost estimate, DOD currently estimates that it would cost approximately $12.1 billion to implement the current realignment plan—not including the Australia segment of the realignment. However, this estimate is not reliable because it omits potentially critical cost elements and risk parameters tied to the assumptions, and lacks detailed information on requirements for several key cost components that are needed to capture all costs related to the realignment. GAO’s cost estimation guide specifies four characteristics of a reliable cost estimate—the extent to which an estimate is comprehensive, well documented, accurate, and credible. We found DOD’s estimate only partially meets best practices for being comprehensive, and according to best practices, when a cost estimate is not comprehensive, it cannot fully meet the other characteristics of a reliable cost estimate. As a result, we did not evaluate the other three characteristics. According to DOD officials, specific requirements and their associated costs cannot be developed for each cost component until the necessary analyses and host nation negotiations have been completed. Still, we found that DOD did not include some up-front practices that could have provided a more reliable estimate and could have been done despite the fact that the environmental analyses and host nation negotiations are not complete. For example, DOD officials identified mobility support as a critical component to the implementation of the realignment; however, no costs associated with mobility support were included in the estimate. Additionally, DOD based its estimate on several assumptions, but there was no evidence that DOD identified risk impacts or parameters for any of these assumptions. Overall, the cost components not fully addressed in the current estimate include the Guam physical layout and requirements; the housing requirements on Guam; the requirements to upgrade utilities and infrastructure on Guam; the Joint Training Range Complex in the Northern Marianas; the Marine Corps requirements for Australia; the Marine Corps requirements for Hawaii and other U.S. locations; and mobility support. In addition, DOD continues to seek funding for utilities and infrastructure projects on Guam that once supported the original Marine Corps realignment plan. DOD officials believe that some of these projects should not be affected by the current realignment plan. However, assessments to determine the extent to which these projects remain valid for the current realignment plan have not been completed. As a result, it is unknown to what extent these projects need adjusting, if at all, to support the new realignment plan. Although DOD officials expect to develop a more reliable estimate as the necessary environmental analyses are completed and host nation negotiations have been finalized, they have not determined when this estimate will be available for Congress. DOD has revised its plans and updated its cost estimates several times to relocate Marines from Okinawa and realign them to other locations in the Pacific. 
Table 1 lists the significant features, the number of Marines expected to leave Okinawa, and the estimated costs associated with its plans. As noted in table 1, under the original realignment plan developed in 2006, the III Marine Expeditionary Force would have moved most of its headquarters elements from Okinawa to Guam, but not the other elements, such as ground forces and aviation units, that are needed to create a full Marine Air-Ground Task Force capability. In 2010, DOD developed a revised cost estimate, referred to as the Original Realignment Plan Revised Cost Estimate (see table 1). This estimate is based on planning, studies, and defined facilities requirements and included additional cost elements, such as operation and maintenance, procurement, overseas housing allowance, and family housing and operations. On the basis of this more detailed analysis, DOD’s estimate for the original realignment plan increased from $10.3 billion in 2006 to $19 billion in 2010. Marine Corps officials told us they believed that not having a full Marine Air-Ground Task Force capability on Guam, a feature of the original realignment plan, would have hindered the Marines’ ability to properly train or deploy as a combined-arms force. Therefore, between 2010 and 2012, DOD examined different options for realigning Marines in the Pacific. Although different options were considered, the costs remained significant to the United States and no option was officially adopted. In April 2012, DOD briefed its current realignment plan to the congressional defense committees. The current realignment plan calls for the establishment of Marine Air-Ground Task Forces in Guam and Hawaii, and a rotational presence in Australia. In 2012, DOD officials developed a preliminary rough-order-of-magnitude cost estimate for the current realignment plan, which relocates fewer troops to Guam and realigns Marines to other locations in the Pacific. This preliminary rough-order-of-magnitude estimate for the current plan, which was briefed to congressional defense committees in the spring of 2012, is $12.1 billion; however, this estimate does not include any potential costs for the Australia segment of the realignment. Table 2 provides more location-specific information on cost estimates and personnel breakdown for the current realignment plan. DOD officials informed us that the estimate for the current realignment plan is lower than estimates for prior plans because of several factors. One factor is that approximately 1,300 more Marines will remain on Okinawa than under previous plans, resulting in overall lower operation and maintenance costs. For example, on Guam and Hawaii, the United States will likely assume full responsibility for such things as labor and utilities costs, whereas the government of Japan provides significant host nation support for these two cost categories in Japan. Since more Marines will remain on Okinawa, operation and maintenance costs will not be as significant, contributing to a lower overall cost estimate. Another factor is a reduction in expected military construction costs on Guam. According to DOD officials, since approximately 5,000 fewer Marines are expected to relocate to Guam, and most of those who do will be deployed on a rotational basis, there will be a reduced demand for additional facilities and housing.
Furthermore, DOD officials stated that the area cost factor for Guam had declined in 2011, helping to reduce the overall cost estimate. DOD officials believe the cost estimate for the current realignment plan will be lower than the estimates for previous plans. However, DOD has not been able to apply the same rigor as was used in developing previous estimates because, unlike for previous estimates, key studies, like environmental analyses, and host nation negotiations that will affect requirements have not yet been completed. According to documentation attached to DOD’s fiscal year 2013 budget, DOD cannot continue the practice of starting programs that prove to be unaffordable. Therefore, DOD plans to achieve program affordability by working to ensure that programs start with firm cost goals in place, appropriate priorities set, and necessary trade-offs made to keep them within affordable limits. Furthermore, this documentation states that understanding and controlling future costs from a program’s inception is critical to achieving the goal of affordability. According to GAO’s Cost Estimating and Assessment Guide, whether or not a program is affordable depends a great deal on the quality of its cost estimate. A reliable cost estimate is critical to the success of any government program, because it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, and meaningful progress measurement. Office of Management and Budget guidance containing best practices indicates that programs should maintain current and well-documented estimates of program costs. In addition, our research has identified a number of best practices that provide a basis for effective program cost estimating and should result in reliable cost estimates that an organization can use to make informed decisions. These practices can be organized into the four characteristics of a reliable cost estimate. The cost estimate should be comprehensive, well-documented, accurate, and credible, as explained in table 3. Our assessment of DOD’s preliminary estimate for the current realignment plan is that it is not reliable, because it is missing costs and based on limited data. To arrive at this conclusion, we assessed DOD’s preliminary estimate against one of the four characteristics of a reliable cost estimate, comprehensiveness. As shown in table 4, we found that the preliminary estimate only minimally met best practices for being comprehensive. According to best practices, when a cost estimate is not comprehensive, it cannot fully meet the other characteristics of a reliable cost estimate. For example, because the cost estimate is missing some cost elements, the documentation is incomplete, and since the requirements for the realignment are still being determined, the estimate cannot be considered accurate. Finally, the cost estimate is not credible because it does not include a full risk and uncertainty analysis, and the potential exists that some of the costs have been underestimated. Table 4 provides a summary of our assessment of DOD’s preliminary cost estimate. As discussed in table 4, we found that DOD’s preliminary rough-order-of-magnitude estimate does not fully include requirements and associated costs for all segments of the realignment plan, because this information will not become available until several environmental analyses and specific host nation negotiations are completed.
While the reason for the missing requirements and their associated costs is understandable, we also found that DOD omitted specific costs and risk data that could have resulted in a more reliable estimate and are not dependent on completion of the environmental analyses and host nation negotiations. Most notably, the estimate did not include any costs for mobility support, a component that DOD officials said was necessary for the implementation of the realignment. According to some DOD officials we spoke to, the potential cost for additional mobility support could be considerable. In addition, we found that the estimate was based on several assumptions, but there was no evidence that DOD identified risk impacts or parameters for any of these assumptions. For example, DOD assumed it would need $600 million to cover utilities and infrastructure costs for Guam. There were no risk parameters put on this assumption indicating the estimate could be higher or lower. Considering that historical data from previous realignment plans estimated utilities and infrastructure costs for Guam at over $1 billion, risk parameters could have identified the potential for higher costs to DOD. Overall, we found that the preliminary estimate does not adequately reflect the program because it cannot yet fully account for the requirements and costs associated with the following seven cost components: 1. Guam Physical Layout and Requirements. 2. Housing Requirements on Guam. 3. Requirements to Upgrade Utilities and Infrastructure on Guam. 4. Joint Training Range Complex in the Northern Marianas. 5. Marine Corps Requirements for Australia. 6. Marine Corps Requirements for Hawaii and Other U.S. Locations. 7. Mobility Support. We discuss each of the seven cost areas in more detail below. DOD does not yet know the full facilities requirements for Guam or potential environmental mitigations; therefore, it does not have sufficient information to determine the full costs of building the facilities and training ranges needed to support the Guam segment of the realignment. The cost estimate for the Guam segment of the current realignment plan— including the cost of establishing training ranges in the Commonwealth of the Northern Mariana Islands and investing in local civilian infrastructure—is approximately $8.6 billion, of which a portion will be funded by the government of Japan. However, this estimate does not fully capture the total costs of the Guam segment of the realignment, because DOD has not determined the physical layout of the Marine Corps presence on Guam, or fully identified specific infrastructure requirements yet. To estimate the costs for the Guam portion of the realignment, DOD officials assumed that the main Marine Corps installation, or cantonment, would be constructed at Finegayan (an area in the northern part of Guam just south of Andersen Air Force Base), and that the training range requirements would not have changed from those in prior plans. However, since the current plan calls for fewer Marines to relocate to Guam, and the composition of the Marine units that will relocate has changed, different locations might be selected for the main Marine Corps cantonment and live-fire training ranges. Those locations under consideration include military space at Andersen Air Force Base and Naval Base Guam. 
As of January 2013, DOD is assessing alternatives and the possible effects that the current plan might have on Guam’s infrastructure and on the environment in order to develop the Supplemental Environmental Impact Statement. The results of DOD’s assessments could affect where DOD builds, what the facilities requirements will be, and what environmental mitigations DOD might take. DOD officials told us that they will wait until this environmental impact statement is completed before proceeding with any further planning and assessments that may be needed to identify specific requirements. DOD does not know the minimum level of government-provided housing needed to accommodate the relocated Marines and to inform its housing investment strategies to ensure that the housing needs of servicemembers and their families are met cost-effectively on Guam. Of the $8.6 billion estimate for the Guam segment of the current realignment plan, DOD includes approximately $400 million to construct military housing for Marines on Guam; however, DOD has not yet performed the necessary housing analysis to validate this estimate, nor has it determined how Navy housing policies and Joint Region Marianas housing practices will affect Marine Corps housing requirements. According to Joint Region Marianas officials, Joint Region Marianas is responsible for maintaining and sustaining housing on Guam. In 2010, DOD performed an analysis to determine the minimum level of government-provided military housing needed to serve both unaccompanied and accompanied servicemembers on Guam. However, this analysis was based on the original realignment plan and is no longer valid. The current realignment plan involves a smaller population and will consist primarily of rotational personnel. Navy officials told us that a realistic housing analysis identifying construction requirements for the Guam segment of the realignment could not be completed until closer to the time when Marine Corps families are expected to move to Guam, which is not expected for another 10 years. In the interim, Marine Corps officials have said that they will plan to build to 100 percent of the Marine Corps’ housing requirement on Guam, an assumption that will be used while completing necessary environmental reviews for the Guam segment of the realignment. According to Joint Region Marianas officials, their organization does not have the authority or responsibility to determine how much housing the Marine Corps should build on Guam. Navy housing policy is to encourage and rely on private-sector housing whenever possible. Along these lines, the practice of Joint Region Marianas is to rely on the private sector for housing for servicemembers whenever possible. A majority of the current Navy and Air Force servicemembers choose to live in private housing on Guam. Joint Region Marianas officials were uncertain what the effects would be if the Marine Corps proceeds with building 100 percent of the housing it needs for the relocated Marines, and whether the Marine Corps would attempt to prohibit Marines from living in private, off-base housing. Furthermore, according to DOD officials, if the decision is to house Marines on any existing installations on Guam, such as Andersen Air Force Base or Naval Base Guam, additional support infrastructure, such as dining facilities and gymnasiums, will likely be required. Joint Region Marianas also has a surplus of unoccupied military housing that could potentially be used to house Marines (see fig.
7 for photographs of select unoccupied military housing units on Guam). Unoccupied housing will be discussed in greater detail later in this report. DOD has not updated its list of utilities and infrastructure requirements that is tied to the current realignment plan; therefore DOD does not have an accurate estimate of how much it will cost to upgrade Guam’s utilities and infrastructure to support the planned Marine Corps presence on the island. These public infrastructure requirements include utilities and infrastructure improvements, as well as projects to augment public health and social services, and mitigate the social, economic, cultural, and environmental effects of the Marine Corps realignment to Guam. In June 2009, we reported that DOD had determined that existing utilities and infrastructure on Guam were near or at their maximum capacities already, and would require significant enhancements to support the increase in the island’s population expected under the original realignment plan. In addition, during the development of the 2010 Final Environmental Impact Statement, DOD, the Guam Waterworks Authority, and the U.S. Environmental Protection Agency developed a list of water and wastewater projects to address deficiencies in Guam’s water and wastewater infrastructure and respond to the increased population driven by the original realignment plan. This list of projects, estimated to cost approximately $1.3 billion, spanned the first 5 years of an overall 30-year $5.3 billion capital improvement plan. The government of Japan was expected to provide $600 million of the needed $1.3 billion, with the remainder to be provided by DOD. Funding outside this 5-year timeframe was expected to be covered by the government of Guam and non-DOD federal government agencies. According to DOD Office of Economic Adjustment officials, although other federal agencies, such as the Department of the Interior’s Office of Insular Affairs, could help fund these improvements on Guam, as of February 2013, there has been no financial assistance for these projects from non-DOD agencies. Although $1.3 billion in water and wastewater projects were connected to the original realignment plan, the Marine Corps, as part of the $8.6 billion estimate for the Guam segment of the current realignment plan, only identified $600 million for all utilities and infrastructure. According to Marine Corps officials, this $600 million was to only fund water and wastewater projects that the government of Japan was previously expected to finance, but is no longer obligated to pay for as a result of the adjustments to the 2006 Roadmap announced in April 2012. Marine Corps officials also said that the decision to include only $600 million was a planning decision and not based on any updated analysis of public infrastructure requirements for the current realignment plan. These same officials said that any updated analysis would not be available until the Supplemental Environmental Impact Statement for Guam was completed. According to Office of Economic Adjustment officials, no matter how many Marines relocate to Guam, significant improvements to the water and wastewater infrastructure will be necessary; many of these improvements are included in the $1.3 billion water and wastewater infrastructure improvement estimate that was linked to the original realignment plan. 
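The figures above also show why the absence of risk parameters matters: the current estimate carries a $600 million planning assumption for all utilities and infrastructure, while the water and wastewater project list tied to the original plan totaled $1.3 billion, of which the government of Japan was previously expected to fund $600 million. The sketch below treats those two figures as a simple low/high range of the kind a risk parameter would supply; the range, the variable names, and the arithmetic are illustrative assumptions, not DOD or GAO estimates.

```python
# Illustrative only: compares the $600 million utilities planning assumption in
# the current Guam estimate with the $1.3 billion water and wastewater project
# list tied to the original plan, treating the two as an assumed low/high range.
# Amounts are in billions of dollars.

current_plan_utilities_assumption = 0.6   # carried in the $8.6 billion Guam estimate
original_plan_water_wastewater = 1.3      # 5-year project list under the original plan
original_japan_share = 0.6                # portion Japan was previously expected to fund

low_estimate = current_plan_utilities_assumption    # assumed lower bound
high_estimate = original_plan_water_wastewater      # assumed upper bound
potential_unbudgeted = high_estimate - low_estimate
dod_share_under_original_plan = original_plan_water_wastewater - original_japan_share

print(f"Planning assumption:               ${low_estimate:.1f} billion")
print(f"Prior 5-year requirement:          ${high_estimate:.1f} billion")
print(f"Potential unbudgeted amount:       ${potential_unbudgeted:.1f} billion")
print(f"DOD share under the original plan: ${dod_share_under_original_plan:.1f} billion")
```

Even this crude range makes the underlying point: without risk parameters on the $600 million assumption, decision makers have no indication that utilities and infrastructure costs could run several hundred million dollars higher.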
DOD plans and designs for a joint training range complex on Tinian and Pagan in the Commonwealth of the Northern Mariana Islands have yet to be finalized, and the costs for this complex are not fully known. To support the original realignment plan, DOD developed plans and designs for live-fire training ranges on Tinian and Pagan, since not all training could be accommodated by the proposed live-fire training ranges on Guam. DOD also conducted several studies to identify training shortfalls in the region and determined the Marine Corps training requirements for units designated for realignment to Guam. According to Marine Corps officials, these plans and requirement studies have been under review since August 2012, and the studies have been updated to reflect the changes made under the current realignment plan. Marine Corps officials said that they intend to reexamine their plans and designs as they prepare an updated environmental impact statement for this complex; this environmental impact statement will be a separate effort from the Guam Supplemental Environmental Impact Statement, and, according to DOD officials, will be entitled the Commonwealth of the Northern Marianas Joint Military Training Environmental Impact Statement. For the joint training range complex environmental impact statement, Marine Corps officials plan to examine alternatives and identify environmental mitigations needed to proceed with the development of the complex. Marine Corps officials anticipate issuing a Record of Decision by early 2016. As part of the $8.6 billion estimate for the Guam segment of the current realignment plan, DOD identifies military construction costs for the complex to be approximately $800 million. However, this estimate does not include updated costs associated with environmental mitigations or operation and maintenance associated with the current realignment plan. For example, Marine Corps officials said that developing an amphibious landing training area on Tinian could require coral realignment, which could cost approximately $10 million (see fig. 8 for photographs of select proposed amphibious landing training areas). According to these same officials, the Marine Corps will need to wait until environmental analyses are completed before fully determining the costs for environmental mitigations associated with the complex. Additionally, the Marine Corps estimated some operation and maintenance costs to maintain and sustain the complex under previous realignment plans, but has not updated these estimates based on the current plan. According to Marine Corps officials, there are currently no permanent DOD training facilities on Tinian, but the Marine Corps intends to build permanent facilities to store equipment and house civilians and servicemembers deployed to the island to train. Furthermore, DOD has not developed a Concept of Operations for the complex, which could be used to determine other associated costs, such as transportation to and from the island. DOD has developed some initial, rough-order-of-magnitude cost estimates to establish a rotational presence of approximately 2,500 Marines in Australia; however, these cost estimates cannot be considered comprehensive, because they are not based on finalized plans or requirements. 
According to Marine Corps officials, facility requirements for the Marine Corps have been provided to the government of Australia; however, nothing has been finalized because there has yet to be a formal agreement on host nation support options for the provided requirements. In preparing to establish a rotational Marine Air-Ground Task Force in Australia, the Marine Corps rotated approximately 200 Marines to Darwin, Australia, for 6 months beginning in April 2012. DOD officials have also visited Australia to conduct site assessments and hold discussions with their Australian counterparts. According to DOD officials, DOD cannot finalize plans or fully determine facilities, housing, and training requirements until negotiations with the government of Australia have concluded. For example, Marine Corps officials said that the training requirements for Marines in Australia will not differ from those of Marines on Guam; therefore, officials did not know what additional infrastructure and support facilities in Australia, if any, would be needed to address these requirements. DOD officials acknowledged that the level of host nation support has not been determined, including Australian preferences for which Australian facilities will host the Marines, whether new facilities need to be built and by whom, and how to coordinate training ranges for exercises—all of which could significantly affect the costs of rotating a presence to Australia. Other details related to the Marine Corps presence will also need to be negotiated. For example, according to Marine Corps officials, the Marine Corps is considering prepositioning equipment in Australia, since it may be cost-prohibitive to transport equipment with each new deployment, given the costs associated with agricultural quarantine inspections and transportation; however, Marine Corps and Defense Logistics Agency officials we spoke with said it was too early to determine any details related to prepositioned stock because no requirements had been established yet. DOD has developed some initial, rough-order-of-magnitude cost estimates to relocate Marines to Hawaii and the continental United States; however, these cost estimates cannot be considered comprehensive because they are not based on finalized plans or requirements. According to Marine Corps officials, information needed to help develop a comprehensive cost estimate will not be available until necessary environmental analyses have been completed. The Marine Corps and Navy have conducted initial assessments of possible locations to expand the Marine Corps presence in Hawaii, but facility, housing, and training requirements to support the realignment remain undefined. According to Marine Corps officials, Naval Facilities Engineering Command Pacific is currently in the process of conducting three preliminary studies for the Marine Corps which will examine many potential options (among all DOD lands on Oahu) for basing additional Marines in Hawaii. The Marine Corps has yet to identify possible locations for units relocating to the continental United States. According to Marine Corps officials, before the Marine Corps can relocate units to Hawaii and the continental United States, DOD will need to perform environmental analyses and develop plans. 
DOD did not include any estimates for mobility support in its preliminary estimate and could not provide sufficient information on how it intends to provide mobility support—which could be costly—to Marine Corps units once the current realignment plan has been implemented. According to Marine Corps officials, the current realignment plan was developed under the assumption that sufficient mobility capabilities would be available to support Marine Corps units stationed in the Pacific; however, as of August 2012, although DOD officials said studies looking at lift in the Pacific were underway, DOD had not completed any studies to determine the implications for mobility of distributing Marines to multiple locations in the Pacific. Marine Corps officials responsible for managing and implementing the current realignment plan could not provide us with information on how the Marines would travel to and from routine operations, such as training events, and contingency operations once the current realignment plan was implemented. As of August 2012, DOD officials said that the Department of the Navy was conducting a study of possible mobility solutions to support the current realignment. However, at the time of our review, the study remained in draft format and under review at DOD. Also, United States Transportation Command and its subordinate commands have not assessed the implications of the current realignment plan’s mobility requirements for their current operations and assets, because no request has been made to perform such a study. According to United States Transportation Command officials, they did not assess mobility requirements associated with distributing Marines to multiple locations in the Pacific in the command’s last Mobility Capability Requirements Study, and to their knowledge, mobility requirements supporting the current realignment plan were not assessed for the command’s current study, due to be published in summer 2013. The Marine Corps uses a variety of assets to transport personnel and equipment in the Pacific region. For example, the Marine Corps has in the past chartered the Westpac Express, a commercial shipping vessel, to transport personnel and equipment to various locations for training exercises or contingency operations. Marine Corps officials stated that they assumed that they would be able to use recently acquired Joint High Speed Vessels to transport troops and equipment. The Marine Corps can also use air and sea assets provided by U.S. Transportation Command. Another option is the Amphibious Ready Group, which consists of a group of Navy warships, including amphibious assault and dock landing ships, and a landing force used to perform amphibious operations. The Navy has one forward-deployed Amphibious Ready Group stationed in Sasebo, Japan, that supports the 31st Marine Expeditionary Unit. Pacific Command officials said that a possible solution would be to deploy another Amphibious Ready Group to the command’s area of responsibility, but this could be costly if new ships and supporting infrastructure are required. According to the Congressional Budget Office, the average cost of constructing an Amphibious Assault Ship alone is $4.3 billion. DOD civilian, military, and State Department officials have all stressed the importance of ensuring that sufficient mobility capabilities are available to support the current realignment plan. State Department officials said that U.S.
allies in the region may become concerned with DOD’s ability to address threats, given the distribution of Marine Corps units across the Pacific region, as proposed under the current realignment plan. Military officials also warn that a lack of mobility capabilities could affect the ability of the U.S. military to both adequately train and execute missions. DOD has sought funding to upgrade utilities and infrastructure on Guam prior to updating its assessment of requirements needed to support the personnel and facilities changes in the current realignment plan. According to Office of Economic Adjustment officials, $106.4 million in funding was sought for the first stages of the Guam water and wastewater improvements in fiscal year 2013. In addition to the water and wastewater improvements, DOD sought an additional $33 million of funding for the completion of mental health facilities, and the construction of a public health laboratory on Guam in fiscal year 2013. The $33 million is the second half of a $66 million island-wide socioeconomic improvement plan coordinated through the Economic Adjustment Committee based on the sudden population growth associated with the original realignment plan. The other $33 million, for projects that included the construction of a cultural repository, the purchase of school buses (for emergency evacuations and military dependents), and improvements to the Guam Mental Health and Substance Abuse Facility, was sought in fiscal year 2012. The National Defense Authorization Act for Fiscal Year 2013 authorized appropriations for the Office of Economic Adjustment, but did not include specific authorizations for the projects. According to DOD officials, funding for these projects has been put on hold until restrictions imposed by the National Defense Authorization Act for Fiscal Year 2013 have been addressed. According to Office of Economic Adjustment officials, some of the socioeconomic and utilities projects, such as the cultural repository, and water and wastewater improvements, should not be affected by the changes in personnel and facilities associated with the current realignment plan; however, some projects, such as the improvements to a mental health and substance abuse facility, may not be necessary due to fewer Marines and dependents relocating to Guam. Office of Economic Adjustment officials said that they intend to perform assessments, in conjunction with the Joint Guam Program Office, to reexamine and validate all utilities and infrastructure projects during DOD’s development of the Supplemental Environmental Impact Statement for Guam. However, until these assessments are completed, DOD is seeking funding for utilities and infrastructure projects that have either not been fully estimated or may no longer be needed. DOD officials told us that they intend to develop a more reliable cost estimate for the current realignment plan as environmental analyses and host nation negotiations are completed. According to DOD officials, the completed analyses and finalized host nation negotiations will provide the necessary information needed to complete a comprehensive cost estimate for the current realignment plan, but DOD officials have not determined when this estimate will be available.
They informed us that a more detailed estimate for the Guam realignment would require the completion of an ongoing environmental impact statement on Guam, which is expected to be completed in 2014, and a separate environmental impact statement for the Joint Training Range Complex on Tinian and Pagan, which is expected to be completed in 2015. According to DOD officials, the Hawaii segment of the realignment would also require an environmental analysis before a more detailed cost estimate could be performed, but as of March 2013, they did not anticipate that this analysis will be completed until a date beyond 2018. Furthermore, DOD officials do not anticipate that a more detailed estimate for Australia will be available until host nation negotiations are complete, but no date has been determined for the conclusion of these negotiations. According to the Office of Management and Budget guidance containing best practices for cost estimating in the context of capital programming, early emphasis on cost estimating during the planning phase is critical to successful life cycle management of a program or project. This guidance recognizes that insufficient data and undefined risks are some of the challenges in estimating costs. It also notes that the cost estimating process is continuously updated, on the basis of the latest information available, to keep the estimate current, accurate, and valid. According to DOD officials, updated cost estimates will be developed when additional data from ongoing analyses are available and negotiations have been completed. However, until estimates are developed that address the seven cost components described in this report, DOD will not be able to provide Congress and other stakeholders with a reliable cost estimate to make informed funding decisions regarding the realignment of Marines. In April 2012, DOD announced that it would be revising its previous Marine Corps realignment plan; however, DOD has not yet completed two key planning mechanisms: an integrated master plan that synchronizes the various realignment initiatives with all geographic segments of the realignment and a construction support strategy. Although DOD has taken initial steps to begin the master planning effort for the realignment, DOD has not yet been able to fully develop an integrated master plan synchronizing the realignment with other DPRI initiatives and laying out the necessary facilities, progression of construction, unit movements, and costs to efficiently complete the realignment. Furthermore, DOD has not developed a strategy to support a potential surge of simultaneous Japanese construction projects associated with the other DPRI initiatives that may occur concurrently with the realignment of Marines. DOD is moving forward with the current realignment plan; however, Marine Corps officials are still determining how the projects associated with the Futenma Replacement Facility and Okinawa Consolidation are either related to or dependent on each other, and what effects these projects might have on the realignment of Marines. According to Marine Corps officials, uncertainties surrounding these initiatives and the effects on the realignment of Marines will exist until the government of Japan can provide a timeline for construction of related facilities. The Futenma Replacement Facility is an initiative in which Japan is constructing facilities and a runway to replace Marine Corps Air Station Futenma on Okinawa. 
Okinawa Consolidation is an initiative that involves returning land to Japan that is currently occupied by U.S. military installations and consolidating the remaining U.S. forces in less populated areas of Okinawa once Japan has constructed necessary replacement facilities. Under the 2006 Roadmap and the 2009 agreement, these initiatives, along with the realignment of Marines to Guam, were directly linked because progress on both Okinawa Consolidation and the realignment of Marines was contingent on Japan making a certain level of progress toward completion of the Futenma Replacement Facility and financial contributions to fund development on Guam. However, as part of the April 2012 statement, the Security Consultative Committee decided to delink these initiatives, effectively no longer requiring that progress on constructing the Futenma Replacement Facility be made before the other initiatives could commence. This change, in theory, allows all three initiatives to move forward independently of each other; however, in practical terms, the three initiatives still have elements that are linked, and each could ultimately affect the progress of the others. According to Marine Corps officials, since the three initiatives remain likely to be implemented concurrently, the proper sequencing of movements will influence whether the Marine Corps can maintain full operational capability and how smoothly the realignment can be accomplished. However, at the time of our review, it was too early to tell how each of the three initiatives will affect the progress or sequencing of the others. For example, Marine Corps officials said that many elements of Okinawa Consolidation will still be contingent on substantial progress— and in some cases completion—of the Futenma Replacement Facility at Camp Schwab. In the April 2012 statement, the Security Consultative Committee agreed that some segments of Okinawa Consolidation could start immediately, but the sequencing and timing of the more significant segments, such as those related to Marine Corps Air Station Futenma and Camp Schwab, were going to have to be determined through bilateral planning at a later date. At the time of our review, the two countries had not finalized a bilateral plan for Okinawa Consolidation; therefore, Marine Corps officials on Okinawa could only make assumptions about the details of the initiative. Furthermore, Marine Corps officials on Okinawa believed that unless facilities related to the realignment of Marines are constructed on Guam and Hawaii, significant elements of the Okinawa Consolidation could not progress. For example, certain Marine Corps units that currently reside at Camp Schwab on Okinawa would have to be able to relocate to either Guam or Hawaii before other units could move to Camp Schwab. Since the original realignment plan in 2006, congressional committees have been calling for DOD to submit a master plan or other information regarding the realignment, including costs and schedules for projects. However, DOD has not developed and finalized a master plan in support of the realignment. Congressional committees have expressed concern about the sweeping transformation in the Pacific, including concern regarding the practicality and economic viability of the realignment. In some instances, committees have sought a master plan or other information regarding the realignment of Marines before they will support the authorization or appropriation of certain funds to be used towards the implementation of the initiative. 
Most recently, the National Defense Authorization Acts for Fiscal Years 2012 and 2013 imposed restrictions on the use of funds to implement the realignment until DOD submits certain information to the congressional defense committees, including master plans. Previous GAO reports on the original realignment plan have stressed the importance of a master plan to provide Congress with a complete picture of facility requirements and associated costs so that it can make informed funding decisions. DOD was not able to provide a specific time frame for when it plans to complete an overarching master plan in support of the current realignment plan. An integrated master plan of the kind described in DOD guidance (Department of Defense, Integrated Master Plan and Integrated Master Schedule Preparation and Use Guide, ver. 0.9 (Oct. 21, 2005)) could also take planners through "what if" scenarios to determine whether certain aspects of the realignment are actually executable. At the time of our review, Headquarters Marine Corps, which is conducting the master planning process for the realignment of Marines, had started to develop what officials called the first step of an integrated master plan. This first step, called the synchronization matrix, is described by Headquarters Marine Corps officials as an overarching scheduling tool that synchronizes the various realignment initiatives and graphically depicts how these realignments are interconnected and affected by both unit movements and facilities construction. Specifically, the synchronization matrix attempts to synchronize other DPRI initiatives—the Carrier Air Wing Move from Atsugi to Iwakuni, Okinawa Consolidation, and the Futenma Replacement Facility—with time frames and unit movements associated with the realignment of Marines. During the course of our review, we spoke with Marine Corps officials from the Pentagon, Honolulu, and Okinawa, and heard conflicting views on the logical order of unit movements, the potential effects of the other DPRI initiatives on the realignment, and the time frames associated with each move. For example, Headquarters Marine Corps officials said they believed that fighting forces should be the first to leave Okinawa and relocate to Guam; however, Marine Corps officials we spoke to in Okinawa did not agree, stating that headquarters units should move first, followed by the fighting forces. According to Headquarters Marine Corps officials, the synchronization matrix will serve as a tool to address any conflicting internal views, determine how each initiative relates to the others, and establish the appropriate sequencing of events needed to complete all realignment initiatives (a notional sketch of this kind of dependency-based sequencing appears below). Although the synchronization matrix is an important first step toward the integrated master plan, it is still based on several assumptions regarding environmental analyses, facilities planning, funding availability, and, where applicable, host nation support, meaning that it remains subject to change. As previously discussed, because environmental analyses and host nation negotiations still need to be completed, Marine Corps officials have not developed specific projects, facility and resource requirements, and costs for the realignment of Marines. According to Marine Corps officials, once the necessary analyses and negotiations are completed for each geographic segment of the realignment—Guam, Hawaii, Australia, and the continental United States—DOD can begin to finalize master plans specific to each location.
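To make the sequencing idea concrete, the sketch below is a notional illustration rather than DOD's actual synchronization matrix: it treats a few initiatives named in this report as nodes in a dependency graph and derives one feasible ordering. The specific dependency edges are illustrative assumptions drawn from the officials' statements above.

    # Notional illustration only -- not DOD's synchronization matrix.
    # Each key is an initiative; its set lists the items assumed to come first.
    # The edges are illustrative assumptions based on the statements above.
    from graphlib import TopologicalSorter

    dependencies = {
        "Futenma Replacement Facility (Camp Schwab)": set(),
        "Guam/Hawaii facilities for relocating units": set(),
        "Okinawa Consolidation (early segments)": set(),
        "Unit moves out of Camp Schwab": {"Guam/Hawaii facilities for relocating units"},
        "Okinawa Consolidation (later segments)": {
            "Futenma Replacement Facility (Camp Schwab)",
            "Unit moves out of Camp Schwab",
        },
    }

    # Print one feasible sequencing that respects every assumed dependency.
    for step, initiative in enumerate(TopologicalSorter(dependencies).static_order(), 1):
        print(step, initiative)

A real synchronization matrix layers time frames, funding availability, and environmental milestones onto each node, but the underlying logic is the same: no movement is scheduled before the facilities and initiatives it depends on are in place.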
Each master plan, coupled with the synchronization matrix, will eventually form an integrated master plan for the realignment of Marines. Marine Corps officials told us that they recognize the importance of the master planning process for the realignment of Marines; however, they acknowledge that master plans for some geographic segments of the realignment may take several years to produce. DOD officials stated that they can only estimate when an integrated master plan will be completed, but that it will likely be beyond the 2018 time frame. Until then, Marine Corps officials stated that they will continue to update the synchronization matrix as geostrategic events change and as analyses and negotiations conclude. Still, without an integrated master plan that reflects not only the synchronization of DPRI initiatives with the realignment of Marines but also the projects, facility and resource requirements, and costs for all geographic segments of the realignment, congressional committees will not have the complete picture of requirements and costs they need to make informed funding decisions. With several hundred projects associated with the remaining DPRI initiatives in mainland Japan and on Okinawa, and the likelihood that these initiatives will be implemented concurrently, it remains unclear whether DOD would be able to support a surge of this magnitude in construction in Japan. Although the April 2012 Security Consultative Committee statement did not establish definitive time frames or identify start and completion dates for the Futenma Replacement Facility, Okinawa Consolidation, or the realignment of Marines, DOD officials have stated that it is likely all three of these initiatives will proceed concurrently with the ongoing DPRI initiative to relocate a carrier air wing from Naval Base Atsugi to Marine Corps Air Station Iwakuni in mainland Japan. It is anticipated that the government of Japan will fund and construct the hundreds of projects associated with the remaining DPRI initiatives in mainland Japan and Okinawa; however, DOD will play a critical role in the design and construction oversight of these projects, ensuring that each project is built to U.S. requirements and standards. Although DPRI-related construction at Iwakuni commenced in the last year and only 10 projects in Okinawa have been completed to date, Marine Corps officials told us that DOD has encountered several challenges in supporting the design and construction of these projects; these challenges have led to delays in construction and, in some instances, generated additional costs to the United States. Several DOD officials we spoke with were concerned that if the other DPRI initiatives were to begin, DOD might not be in a position to support the surge in construction. DPRI construction projects in mainland Japan and Okinawa fall under the purview of the United States Army Corps of Engineers. As the DOD construction agent, the Army Corps of Engineers is responsible for working with each service to provide design and construction criteria to the government of Japan, and then for providing design and construction surveillance and inspection to make sure that every DPRI project is completed in accordance with the appropriate requirements and standards. However, according to Marine Corps officials, the Army Corps of Engineers has had a difficult time supporting the DPRI initiatives at both Iwakuni and Okinawa.
For example, Marine Corps officials at Iwakuni told us that the Army Corps of Engineers was both underfunded and understaffed to support initial Japanese design and construction efforts, and that, as a result, officials at Iwakuni circumvented the DPRI project development process in order to move project design and development along more quickly. According to these same officials, the government of Japan was prepared to provide as much funding as was necessary to complete construction of the facilities at Iwakuni as early as possible. Marine Corps officials also told us that in its fiscal year 2011 budget, the government of Japan allocated $700 million for the design and construction of facilities at Iwakuni (see fig. 9 for a photograph of the ongoing Japanese construction at Marine Corps Air Station Iwakuni). However, DOD and the Army Corps of Engineers were unprepared to support an effort of this magnitude. According to officials at Iwakuni, the Army Corps of Engineers did not have the resources to support an accelerated buildup of this magnitude by the government of Japan, and those officials questioned whether the Army Corps of Engineers could fully support the entire DPRI initiative at Iwakuni, which will eventually cost Japan nearly $3 billion to complete through 2017. The Marine Corps officials we spoke with in Okinawa agreed with the officials at Iwakuni, suggesting that the Army Corps of Engineers did not have the appropriate resources to oversee the design and construction of the DPRI-related projects in Okinawa. Although fewer than 20 DPRI-related projects have been designed and constructed to date, according to Marine Corps officials, several of these projects had errors that led to unplanned budget expenditures by DOD. Marine Corps officials attributed most of the errors to a lack of communication of proper requirements to the government of Japan and insufficient oversight during the construction process. For example, the government of Japan designed and constructed a bachelor enlisted quarters at Camp Schwab based on requirements provided by the Army Corps of Engineers and the Marine Corps. After construction was completed, the Marine Corps refused to accept the building because it discovered that the building's heating, ventilation, and air conditioning systems did not meet Marine Corps standards. Army Corps of Engineers officials said that they had not identified the discrepancy between the Marine Corps' requirement and the government of Japan's designs prior to construction, resulting in the error. After about a year, the Marine Corps corrected the problem at its own expense, developing an ad hoc solution to the heating, ventilation, and air conditioning issue in order to bring the building up to its standards. In another example, on the basis of requirements provided by the Marine Corps and the Army Corps of Engineers, the government of Japan designed and constructed a new police station with a fenced-in area to house military working dogs at Camp Foster. After construction had been completed, it was discovered that the fenced-in area did not meet DOD standards for housing military working dogs. As a result, the Marine Corps funded a corrective action. See figure 10 for a list of additional DPRI-related construction errors in Okinawa, as described by the Army Corps of Engineers.
Army Corps of Engineers officials said they are aware of the setbacks associated with the DPRI projects at both Iwakuni and Okinawa; however, they told us that external factors associated with the Corps' DOD counterparts (U.S. Forces–Japan and the military services) may have contributed to these setbacks. For example, Army Corps of Engineers officials told us that both a lack of proper master planning and the circumvention of the DPRI project development process at Iwakuni led to an expedited process that caused heightened risk. The expedited process, according to Army Corps of Engineers officials, made it difficult to catch and correct errors in the design phase of DPRI projects. Army Corps of Engineers officials expressed concern that if DPRI projects in Okinawa proceed in a similarly expedited manner to those at Iwakuni, similar problems will occur, including heightened risk and an inability to appropriately plan for resource requirements. In response to the setbacks related to DPRI projects at both Iwakuni and Okinawa, the Army Corps of Engineers has requested funding to increase staff at Iwakuni by 75 percent in fiscal year 2013 and has designed tools intended to prevent such setbacks in the future. For example, Army Corps of Engineers officials developed case studies of the errors and disseminated the information throughout their organization to better prepare their staff. In March 2011, Army Corps of Engineers personnel and senior leadership held an internal meeting to review construction project errors and discuss ways of improving their services. In addition, to ensure that they have sufficient resources and staffing to support the Marine Corps and the other services, Army Corps of Engineers officials told us that they have conducted analyses every 6 months since July 2011 to forecast their future workload. These forecasts, according to Army Corps of Engineers officials, will continue and will include planning for any ramp-up needed in the near term. Although the Army Corps of Engineers has recognized and attempted to address the problems associated with the DPRI-related projects at Iwakuni and Okinawa, at the time of our review it had not yet developed a strategy, in conjunction with its DOD counterparts, to support a surge in Japanese construction that would require the Army Corps of Engineers to support multiple, concurrent DPRI initiatives. Army Corps of Engineers officials acknowledged that there is potential for a surge in Japanese construction, so it is important that both the Army Corps of Engineers and the services be prepared for such an event. Furthermore, Army Corps of Engineers officials stated that the way the government of Japan funds and plans for construction makes DOD planning difficult. Specifically, according to DOD officials, the government of Japan funds projects on a year-by-year basis, giving DOD limited time to react in any given year. According to Army Corps of Engineers officials, the lack of a bilateral integrated master schedule for all DPRI initiatives between the United States and Japan makes it very difficult for the Corps to forecast its resource requirements more than 1 year at a time. However, Army Corps of Engineers officials agreed that they must still work to develop a strategy to deal with the possibility that additional resources will be needed to support any surges in Japanese construction up to 5 years in the future.
In previous work, we identified key elements that should be included in a support strategy:

- Goals, subordinate objectives, activities, and performance measures set clear desired results and priorities, specific milestones, and outcome-related performance measures, while giving implementing parties flexibility to pursue and achieve those results within a reasonable time frame.

- Organizational roles, responsibilities, and mechanisms for coordinating their efforts identify the relevant components. The strategy clarifies the components' relationships in terms of leading, supporting, and partnering.

- Resources, investments, and risk management identify, among other things, the sources and types of resources and investments associated with the strategy and where those resources and investments should be targeted.

Without a strategy to address future construction surges, the Army Corps of Engineers and the services will not have a clear picture of how the design and construction process should be handled moving forward, or of the resources needed to support the effort. Such a strategy would allow both the Army Corps of Engineers and service officials—mainly the Marine Corps—to establish a process that assigns specific responsibilities, with time frames, to participating parties and would assist the Army Corps of Engineers in identifying the funding needed to support construction projects being conducted by both the United States and Japan. Without a strategy, DOD may not be in a position to successfully support upcoming DPRI-related projects and may face further planning and construction errors of the kind that have in the past led to unplanned funding needs and delayed completion schedules. Delayed completion schedules may ultimately affect the implementation of the realignment of Marines or other DPRI initiatives. DOD has taken some steps to plan for sustaining its forces on both Okinawa and Guam until the Marine Corps realignment is implemented and consolidation initiatives on Okinawa are complete, but it has not yet fully identified what will need to be done to sustain the facilities at these locations and what it will cost for the immediate future. Facility maintenance and replacement for installations on Okinawa that have been identified for return to the government of Japan have been limited for many years. As a result, many facilities have reached the end of their useful life and are in a state of disrepair. Specifically, DOD has identified the sustainment needs on Okinawa for Marine Corps Air Station Futenma; however, it has not identified sustainment needs for other facilities on the island that are expected to eventually be returned to Japan, and it has not fully planned for the sustainment of its family housing units on Okinawa. On Guam, DOD has begun to develop initial sustainment plans for its family housing units; however, these plans have not been updated to reflect the Marine Corps' current realignment plan. According to Marine Corps officials on Okinawa, many of the facilities supporting U.S. forces there are old and in need of resources to sustain them. We observed multiple facilities in various states of disrepair during our site visit to the island. For example, we observed facilities at Marine Corps Air Station Futenma that had been shuttered because service officials deemed them too dangerous to occupy; in some cases, there was so much mold growth that an extensive removal process would be necessary before the facilities could be occupied again.
In addition to mold removal, some of these facilities would require other improvements or complete renovation before they could be used again. Marine Corps officials said that at some of the Marine Corps installations south of Kadena Air Base, several facilities are either nearing or have already exceeded their 50-year service life and will need to be either renovated or replaced. The most significant examples of aging and deterioration that we observed were at Marine Corps Air Station Futenma, the installation that has been at the center of controversy on Okinawa for nearly two decades. At Futenma, at several facilities currently in use, we observed concrete ceilings, staircases, and walls that showed evidence of deterioration, ranging from superficial cracks on the exterior of the structure to severe fracturing that rendered the facility unsafe for occupancy. Figure 11 shows some examples of deteriorated conditions at two locations on Okinawa. On Okinawa, DOD conducts routine maintenance and repair to keep its facilities in good working order over a 50-year service life and has historically relied on the government of Japan to fund the construction of new U.S. facilities to replace deteriorating, obsolete structures. The government of Japan will demolish these structures and construct new ones through the Japan Facilities Improvement Program. The program allows DOD to identify necessary projects from across the services and installations in Japan, rank them by priority, and submit them to the government of Japan for funding consideration. In recent years, sustainment funding and facility replacement by the government of Japan have been limited because delays in the Marine Corps realignment, construction of the Futenma Replacement Facility, and Okinawa Consolidation have left unclear what facilities will need to be available and sustained—and for how long. Since 2006, in the face of the uncertainties surrounding these initiatives, both DOD and the government of Japan have questioned how they will proceed with maintaining some existing facilities on Okinawa. According to Marine Corps officials, because of the uncertainties surrounding the timing of the various realignment initiatives, the government of Japan has been hesitant to invest in constructing new facilities on Okinawa. Figure 12 shows a steady decline in funding since 2002 under the Japan Facilities Improvement Program. In 2002, Japan provided approximately ¥75 billion, or approximately $834 million. Of that ¥75 billion, approximately ¥6 billion, or $67 million, was for Marine Corps installations in Okinawa. The figure shows a significant decline in funding in 2006, the year the 2006 Roadmap was developed outlining the realignment initiatives. The U.S. dollar figures are based on the January 2013 foreign exchange rate of approximately ¥89.86 per U.S. dollar and are intended to indicate approximate values (an illustrative conversion check appears below). The Air Force, which is the executive agent for military family housing on Okinawa, had 7,823 family housing units in its active inventory as of February 2013, approximately 75 percent of which were occupied. According to DOD officials, similar to other DOD facilities on Okinawa, many of the family housing units are old and in need of renovation or replacement. At Kadena Air Base, we observed several vacant housing units that had not been renovated and showed significant mold damage. We also observed several recently renovated units.
According to Air Force officials, approximately 58 percent of the military family housing units on Okinawa have been assessed as inadequate. However, inadequate units are not necessarily uninhabitable. Air Force officials told us that one reason these units have been assessed as inadequate is that they were built to Japanese standards, which can differ significantly from Air Force standards. For example, Japanese accommodations tend to be smaller than housing units constructed by the services, so that kitchen, bathroom, and bedroom space may be more limited than it is in family housing units at other DOD locations, although the units are still viewed as safe to occupy. The Air Force developed a family housing master plan in November 2011 to provide a corporate, requirements-based housing investment strategy that integrates traditional construction funding and private sector financing. This master plan covered the period of fiscal years 2012 through 2017 and estimated that the total funding required to maintain adequate military family housing and bring all housing-related infrastructure on Okinawa up to modern DOD standards would be $690.7 million; this figure includes $131.7 million for military construction and $559.1 million for operation and maintenance. On the basis of its analyses and master plan, the Air Force has been working to modernize or replace 6,988—89 percent—of its family housing units on Okinawa through various initiatives. Under its Post-Acquisition Improvement Program, the Air Force intends to renovate 3,747 housing units; these renovations are considered to be more extensive than minor repair and maintenance work and are intended to ensure that the renovated units remain habitable for their full service lives. The government of Japan planned to build 1,770 new units under the 1996 Special Action Committee on Okinawa report and another 1,471 under the Japan Facilities Improvement Program, but as of February 2013 it had built only 38 percent of the first group and 13 percent of the second. (See table 5 for a list of family housing projects and their status as of February 2013.) The Air Force's current master plan and supporting analyses were based on the premise that approximately 8,600 Marines and 9,000 dependents would relocate to Guam by 2014, as anticipated under the original realignment plan. However, under the current realignment plan, DOD is planning to move only 4,700 Marines and their dependents to Guam. Air Force officials informed us that because many of the replacement projects funded by the government of Japan are on hold, they have developed a "bridging strategy" to fund minor renovations for a select number of units in order to meet DOD's housing needs on Okinawa until additional units are constructed or renovated. Because its facilities have deteriorated and replacement projects are on hold, Marine Corps officials have stated that a sustainment funding strategy will be critical for maintaining, in the immediate future, infrastructure that has deteriorated past its useful life. Marine Corps officials have stated that the uncertainties regarding the timing of the realignment initiatives have made it difficult for them to determine what sustainment projects will be needed for facilities on Okinawa, because they do not know when the realignment initiatives will be completed and therefore do not know how long the existing facilities will continue to be used.
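The yen conversions and Air Force housing figures cited above can be cross-checked with a short calculation. The check below is purely illustrative, uses only the figures reported above, and small differences reflect rounding.

    # Illustrative cross-check of figures cited above; differences reflect rounding.

    # 2002 Japan Facilities Improvement Program funding, converted at the
    # January 2013 rate of about 89.86 yen per U.S. dollar.
    rate = 89.86
    print(75e9 / rate / 1e6)       # ~834.6 -> reported as approximately $834 million
    print(6e9 / rate / 1e6)        # ~66.8  -> reported as approximately $67 million

    # Air Force family housing master plan for Okinawa, fiscal years 2012-2017.
    milcon, om = 131.7, 559.1      # dollars in millions
    print(round(milcon + om, 1))   # 690.8  -> reported plan total of $690.7 million
    print(6988 / 7823)             # ~0.89  -> 89 percent of the 7,823-unit inventory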
Marine Corps officials in Okinawa informed us that construction and sustainment projects described in the base master plans for each of the Marine Corps installations on Okinawa have not been updated to take into account delays in the realignment and consolidation plans, and that these base master plans remain in draft form. DOD has not developed any updated sustainment plans for these installations as part of their base master plans—with the recent exception of Marine Corps Air Station Futenma. Many of the facilities on Futenma are 30 to 50 years old and have degraded over time due to limited investment in sustainment and the harsh tropical and corrosive saltwater environment of Okinawa. In response to the April 2012 Security Consultative Committee statement and the delay in the Japanese construction of a replacement facility at Camp Schwab, the United States began to identify sustainment needs for Futenma. In April 2012, in a preliminary estimate, the Marine Corps identified approximately $165 million in funding needed to sustain Futenma for the next 10 years. In October 2012, the Marine Corps developed a draft sustainment plan after performing an assessment of the facilities on Futenma to identify, validate, and prioritize its sustainment needs. This plan included a prioritized list of repair and maintenance projects and new construction projects needed to ensure that Futenma could continue to meet operational and training demands until the Futenma Replacement Facility is constructed and fully operational. The plan prioritizes facility repair and renovation projects to address critical mission and quality-of-life requirements. U.S. Forces–Japan officials said that DOD and the government of Japan are negotiating a possible cost-sharing arrangement, and that as of February 2013, both sides had agreed to a list of projects that the government of Japan had submitted to its legislature for funding consideration. Marine Corps officials said that, without sustainment investment, the installation's ability to support operations would be put at risk. DOD guidance on installation master planning (see Department of Defense Instruction 4165.70, Real Property Management, para. 6.1 (Apr. 6, 2005), and DOD's Unified Facilities Criteria) indicates that installation master plans are to capture facility requirements and propose solutions to meet those requirements from the options available. The Unified Facilities Criteria also indicates that a facility's master plan will be revised and updated to maintain its relevance as a useful planning and management tool. Furthermore, guidance on DOD housing management indicates that DOD housing—both family and unaccompanied—is to be operated and maintained to a standard that protects the facilities from deterioration and provides safe and comfortable living places for servicemembers and their dependents. DOD policy is to rely on the private sector as the primary source for family housing for personnel stationed at locations within the United States. DOD guidance indicates that in overseas locations where servicemembers are given an overseas housing allowance to reimburse them for the cost of housing, the policy of relying on off-base housing first is not mandatory but should be encouraged where appropriate. This guidance also indicates that for installations on U.S.
soil, to determine the need for military housing at the installation, the military services must perform a housing requirements and market analysis to determine whether the adjacent community can accommodate the housing needs of the military and must identify the minimum housing requirement—the minimum level of housing needed on base to allow the installation to effectively accomplish its missions. If the base is located overseas, the military service may determine the need for and applicability of the housing requirements and market analysis, which is not mandatory. DOD policy also states that master plans for housing should address, among other things, military housing requirements. The Air Force's current housing requirements and market analysis for Okinawa does not reflect current plans. While the Marine Corps has provided the Air Force with the number of Marines that will remain on Okinawa following the full implementation of all realignment initiatives, the Air Force office responsible for family housing on Okinawa has not been able to update its housing requirements and market analysis and its housing master plan, because the Marine Corps has yet to provide the Air Force with the incremental schedule and unit movements associated with the realignment initiatives on Okinawa. According to Marine Corps officials, the Marine Corps cannot provide incremental data until bilateral negotiations on the Okinawa Consolidation initiative and the subsequent master plan are finalized. Air Force officials told us that while they plan to update their housing requirements and market analysis in the summer of 2013, the update will cover only the next 5 years. Considering that the realignment initiatives on Okinawa have no definitive timelines and may take decades to complete, the Air Force office responsible for family housing on Okinawa told us that it needs a better understanding of when facilities will be constructed and units moved over the next 10 to 15 years. These incremental data will help Air Force officials conduct housing analyses and develop housing plans based on more short-term and intermediate housing requirement needs on Okinawa. Until the Air Force knows what the incremental Marine Corps housing requirements will be for all phases of the realignment initiatives on Okinawa, it will not have sufficient information to project housing demand and assess the housing sustainment cost for the current realignment plan. As a result, the Air Force will not be able to determine how to sustain its housing inventory on Okinawa in a cost-effective manner, which could lead to overinvesting in certain housing areas and underinvesting in others. Without identifying sustainment needs for all its infrastructure that will be used until the realignment and consolidation actions are implemented, DOD risks not having the information necessary to make informed decisions about maintaining its infrastructure at an acceptable level to carry out its mission. DOD is in the process of developing plans to meet the housing needs of U.S. forces on Guam, but these plans will not take into account the housing requirements associated with the current realignment. According to Joint Region Marianas officials, Joint Region Marianas is responsible for overseeing all military housing on Guam, but Naval Base Guam and Naval Support Activity, Andersen, are each responsible for implementing their own housing operations.
Joint Region Marianas and its installations face many of the same challenges in sustaining aging housing units that, as discussed above, the Air Force faces on Okinawa. A June 2012 assessment of housing on Naval Base Guam found housing deficiencies such as mold, broken windows, and inoperative fans and ventilation systems. While on Guam, we observed several unoccupied housing units that showed significant pest infestation and mold growth in their interiors and heavy vegetation and mold growth on their exteriors. Figure 13 shows pictures of some of the housing units we observed. There is currently a surplus of military family housing on Guam. As of January 2013, approximately 67 percent of available military housing on Guam—1,577 housing units—was occupied, and 785 units remained vacant. An additional 311 housing units have been classified as inactive, which, according to Joint Region Marianas officials, means that those units are viewed as uninhabitable and are no longer considered part of the active housing inventory. (An illustrative calculation based on these figures appears at the end of this discussion.) Joint Region Marianas officials said that the low occupancy rate in their housing units can be attributed to several factors. First, many servicemembers are eligible to live off base and choose to do so to enjoy the benefits of living in the local community. Many servicemembers also choose to reside in private housing to take advantage of the high overseas housing allowance and utilities stipend. According to the Defense Finance and Accounting Service, in 2011 approximately 3,800 personnel from the Air Force, Army, Navy, and Marine Corps who were stationed on Guam received a combined total of $96.1 million in overseas housing allowance. Under the auspices of Joint Region Marianas, both Naval Base Guam and Andersen Air Force Base have developed housing plans that seek to address the low occupancy rate in their on-base housing and make more effective and efficient use of their housing inventories. To increase the occupancy rate, Joint Region Marianas officials are assessing several alternatives, including revising their housing policy to make more DOD civilians eligible to reside in military housing, requiring more servicemembers to reside in military housing, and reducing the number of housing units in the inventory. However, Joint Region Marianas officials informed us that, until the Marine Corps finalizes its requirements, arrival dates, and housing policy, none of their alternatives or plans will consider housing requirements associated with the current Marine Corps realignment plan, which will eventually send nearly 5,000 Marines to Guam. Moreover, they said that they do not have plans to set aside and sustain existing facilities to support the realignment of Marines. Joint Region Marianas officials informed us that, because of the uncertainty regarding when Marines will begin relocating to Guam, they had decided to proceed with developing housing plans independently of the Marine Corps realignment initiative. They explained that delaying action on meeting their housing needs could adversely affect their ability to provide housing to servicemembers currently stationed on Guam. Initially, Joint Region Marianas was making preparations to support the Marine realignment. The Department of the Navy had conducted a housing requirements and market analysis to identify the demand for military housing on Guam based on the Marine Corps requirements associated with the previous realignment plan.
Joint Region Marianas had identified surplus housing units that could have been used to provide transitional housing for Marines relocating to Guam. However, because of the delay of the realignment and the reduced number of troops relocating, the Marine Corps data that the Navy used when it conducted its housing requirements and market analysis were no longer valid. Because no firm requirements or time frame for the realignment are available, Joint Region Marianas officials are currently addressing only the housing issues directly relevant to their immediate needs. DOD data on the cost of maintaining vacant and inactive housing on Guam are limited. Joint Region Marianas officials said that they have yet to determine the average cost of maintaining a vacant or inactive housing unit on Guam. Officials from Andersen Air Force Base estimated that the average annual cost of maintaining an inactive unit on the base, including the cost of providing electrical power and grounds maintenance, is approximately $4,500 per house. However, Joint Region Marianas officials said that similar data have not been calculated for Navy housing on Guam. Joint Region Marianas officials told us that Navy housing units on Guam can be found both on the military installations and embedded in the local community and that, because different types of costs are involved for the two types of housing units, it becomes difficult to determine an average cost for maintaining inactive housing units. For example, officials stated that certain services, like fire protection and security, for Navy housing units located on military installations would be incorporated into the base operating support expenses for that installation; however, for certain Navy housing units embedded in the local community, officials stated that Joint Region Marianas would pay for these services from accounts other than base operating support, and it may be difficult to isolate and merge these data. DOD guidance on economic analysis for decision making states that the purpose of such an analysis is to give decision makers insight into economic factors bearing on accomplishing a project's objectives, and that alternatives must be fully investigated and a determination made on whether an alternative satisfies the functional requirements for the project. This guidance also indicates that, as part of assessing the costs and benefits of alternatives, an economic analysis should include all measurable costs and benefits to the federal government that are incident to achieving the stated objectives of the project. However, unless the Marine Corps housing requirements are considered and the total costs of maintaining vacant and inactive housing units are known, Joint Region Marianas' planned assessment of alternatives will not fully measure the costs and benefits of its housing plans to the federal government. As a result, decision makers will not have sufficient information to identify an investment strategy that addresses both Joint Region Marianas' current housing needs and the Marine Corps' housing requirements once the realignment is completed.
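The Guam housing figures cited above support a few simple derived numbers. The calculations below are illustrative only and are not DOD estimates; in particular, applying the Andersen per-unit rate to all inactive units is a hypothetical assumption, since comparable Navy costs have not been calculated.

    # Illustrative calculations based on the Guam housing figures cited above;
    # these are not DOD estimates.
    occupied, vacant, inactive = 1577, 785, 311

    active_inventory = occupied + vacant
    print(active_inventory)              # 2,362 active units
    print(occupied / active_inventory)   # ~0.67, the reported occupancy rate

    # Average 2011 overseas housing allowance per member receiving it.
    print(96.1e6 / 3800)                 # ~ $25,300 per member

    # Hypothetical scaling only: if every inactive unit cost the Andersen rate
    # of $4,500 per year to maintain (Navy costs have not been calculated and
    # may differ), the annual total would be roughly $1.4 million.
    print(inactive * 4500)               # 1,399,500

Rough figures of this kind are no substitute for the economic analysis discussed above, which would require complete cost data for both Navy and Air Force units.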
DOD believes that rebalancing and strengthening its posture in the Asia-Pacific region offers many advantages, including reassuring allies and partners in the region of the United States' commitment and shaping the security environment, while also providing forward capabilities to deter and defeat aggression. However, in an era of significant budgetary pressures and competition for resources, it is important to conduct detailed planning, supported by comprehensive cost information, to ensure that DOD is making the most efficient use of its resources. Although DOD has revised its plan to relocate Marines off Okinawa, it has yet to identify the total costs, requirements, or sustainability of a move that will realign Marine Corps forces throughout the Pacific. In our assessment of DOD's preliminary estimate for the realignment, we found that the estimate is not reliable because it omits potentially critical cost elements, such as mobility support and risk parameters tied to the assumptions, and lacks detailed information on requirements for several key cost components that are needed to capture all costs related to the realignment. According to DOD officials, specific requirements and their associated costs cannot be developed for each cost component until the necessary environmental analyses and host nation negotiations have been completed. Still, we found that DOD did not incorporate some up-front practices that could have provided a more reliable estimate and that could have been applied despite the fact that the environmental analyses and host nation negotiations are not complete. Office of Management and Budget guidance indicates that it is a best practice to continuously update the cost estimating process to keep estimates current, accurate, and valid. In addition, DOD's overview of its fiscal year 2013 budget states that DOD cannot continue the practice of starting programs that prove to be unaffordable and that it will work to achieve program affordability by ensuring that programs start with firm cost goals in place, appropriate priorities set, and necessary trade-offs made to keep programs within affordable limits. Without comprehensive cost estimates developed for the realignment plan, DOD will be hampered in achieving its affordability goal of not starting a program without firm cost goals in place. DOD acknowledges that it will be 2018 or later before an integrated master plan can be completed to provide Congress with the information it needs on all of the specific projects, requirements, schedules, and costs to aid it in its decision making regarding the realignment of Marines in the Pacific. However, DOD has taken a first step in capturing some of its planning information, including integrated schedules of its planned actions, in its synchronization matrix. While this type of information will periodically change as environmental analyses and negotiations are completed and plans start to be implemented, it would provide Congress with current plans in the interim until the integrated master plan can be completed. Furthermore, it is unknown what the government of Japan's long-term construction schedule will be for building the infrastructure to complete its plans for Iwakuni and Okinawa. DOD has not yet developed a strategy to identify the resources it needs to assist with the development and oversight of these projects, which may involve a surge in concurrent construction. Without a strategy, DOD will not have the safeguards in place to help ensure that facilities are being built to standard and that problems that already existed on a smaller scale are not magnified.
Finally, uncertainties surrounding this realignment have disrupted planning for current facilities and family housing on Okinawa, and family housing on Guam, leaving the potential for hundreds of millions of dollars in unplanned sustainment projects in the future. Without developing comprehensive cost estimates and further planning to support the realignment, DOD risks requesting realignment funds without fully determining requirements, and Congress may be asked to fund requirements without knowing the full cost. Furthermore, without developing updated sustainment plans for both Okinawa and Guam, DOD lacks reasonable assurance that it will have adequate facilities to support operations and the lives, health, and safety of servicemembers and their families. To provide DOD and Congress with more reliable information to inform investment decisions associated with the realignment of Marines and U.S. military posture in the Pacific, we recommend that the Secretary of Defense update the current cost estimate to include additional estimates for mobility support and additional analysis that would quantify the risk impacts and parameters in the event that its various assumptions change. Furthermore, as appropriate environmental analyses and host nation negotiations are completed, the Secretary should update the estimate with comprehensive cost estimates (as identifiable) that factor in and include the following seven cost components associated with the current realignment plan:

- Guam Physical Layout and Requirements;

- Housing Requirements on Guam;

- Requirements to Upgrade Utilities and Infrastructure on Guam;

- Joint Training Range Complex Requirements, including associated environmental mitigation, in the Northern Marianas;

- Marine Corps Requirements for Australia;

- Marine Corps Requirements for Hawaii and Other U.S. Locations; and

- Mobility requirements to support the current realignment plan to conduct routine operations, training, and any contingency situations.

To provide DOD and Congress with sufficient information to make informed decisions about the sequencing of projects supporting the realignment of Marines and the interdependent projects on Okinawa, and about the timing for the funding needed to simultaneously support these projects and those already planned on mainland Japan, we recommend that the Secretary of Defense take the following two actions:

- As the master planning process continues over the next several years, require the Secretary of the Navy to develop annual updates on the status of planning efforts for appropriate congressional committees until such time as master plans are completed for each geographic segment of the realignment. These updates should include, but not be limited to, up-to-date information on the status of initiatives, identified requirements and time frames, and any updated cost information linked to specific facilities or projects.

- Direct the Secretary of the Army to require the Army Corps of Engineers to coordinate with appropriate military service officials involved in the planning and management of DPRI projects in Japan, including U.S. Forces–Japan, Marine Corps Installations Pacific, and Marine Corps Headquarters, to develop a strategy that identifies how the design and construction process for DPRI projects should be handled moving forward and the resources needed to support any surge in construction associated with posture-related initiatives at both Iwakuni and Okinawa.
To aid DOD and Congress in obtaining sufficient information to make prudent investment decisions for the sustainment of U.S. forces on Okinawa and Guam while implementing the planned movements associated with the realignment of Marines and the consolidation efforts on Okinawa, we recommend that the Secretary of Defense take the following three actions:

- Direct the appropriate service officials to update Okinawa installation master plans to include sustainment requirements and the costs to sustain the U.S. presence on Okinawa until the Marine realignment and Okinawa consolidation efforts are completed. At a minimum, these plans should identify both short-term and long-term needs to account for the uncertainty regarding the time needed to implement the realignment and consolidation initiatives on Okinawa.

- Direct appropriate service officials to provide, as they become available, annual master schedule and unit movement updates associated with the realignment initiatives on Okinawa to the appropriate Air Force officials. These updates should include any updated housing requirements, such as the demographics of Marine families required to be housed on Okinawa during the future phases of the realignment initiatives, thus allowing the appropriate Air Force officials to perform up-to-date assessments and develop housing investment strategies reflecting the updated schedule and housing requirements.

- Direct the Secretary of the Navy to conduct an economic analysis, including an assessment of the costs of maintaining vacant housing on Guam, to arrive at an informed decision weighing the cost of maintaining or renovating this housing against the cost of constructing new facilities to support the requirements of the Marine Corps realignment to Guam.

In written comments on a draft of this report, DOD fully concurred with five of our recommendations, partially concurred with two recommendations, and stated that it would work with DOD components to implement the recommendations. While DOD agreed with the content of the report and the recommendations, the department expressed concern with the report's title. DOD believes the title suggests the department currently has the ability to produce comprehensive cost estimates and complete planning for the realignment initiatives but has not done so. Our report title, Defense Management: More Reliable Cost Estimates and Further Planning Needed to Inform the Marine Corps Realignment Initiatives in the Pacific, conveys that more reliable estimates and comprehensive planning will be needed to inform decision makers. We acknowledged in our report that DOD will not be in a position to provide comprehensive cost estimates and complete planning documentation for the realignment to Congress until the environmental studies and host nation negotiations have been completed. However, it is important to ensure that Congress is aware that the cost estimates provided to it to date are not reliable because they are incomplete for the reasons stated above and in our report. As a result, we did not change the title of the report. DOD partially concurred with our recommendation to update its current cost estimate for the realignment to include additional estimates for mobility support and additional analysis that would quantify the risk impacts and parameters in the event that its various assumptions change.
DOD stated that it is in the process of responding to requirements contained in Section 2832 of the National Defense Authorization Act for Fiscal Year 2013, which also requires an assessment of the necessary strategic and logistical resources. However, the provision does not specifically require DOD to include risk impacts and parameters. Furthermore, DOD stated that estimates for mobility support will not be available until the department completes the required environmental planning documents. As our report states, we acknowledge that comprehensive estimates for most costs tied to the realignment cannot be completed until appropriate environmental analyses and host nation negotiations are complete. However, we believe DOD should update its current estimate to include risk parameters, producing a more reliable cost estimate by accounting for potential cost fluctuations if its assumptions change, and to include an initial estimate for mobility support, a cost that DOD officials told us could be considerable. DOD concurred with our recommendation to update the cost estimate for the realignment with comprehensive estimates as environmental analyses and host nation negotiations are completed and, consequently, more specific data become available on seven specific cost components, including Guam physical layout and requirements; housing and utilities infrastructure on Guam; joint or Marine Corps training and other requirements in the Northern Marianas, Australia, and Hawaii; and mobility requirements. DOD stated that it plans to identify and incorporate comprehensive cost estimates as they become available upon completion of necessary environmental planning documents. DOD also concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy to provide annual updates on the status of master planning efforts to the appropriate congressional committees until such time as master plans are completed for each geographic segment of the realignment. DOD partially concurred with our recommendation to require the Army Corps of Engineers to coordinate with appropriate military service officials to develop a strategy to identify how the design and construction process of Defense Policy Review Initiative projects should be handled moving forward, and the necessary resources needed to support any surge in construction in both Iwakuni and Okinawa. DOD noted that it would take these steps, but stated that the effort necessarily relies upon a detailed master plan that has been coordinated among several organizations within DOD in order to identify the necessary resources to support a surge in construction. We agree that developing a master plan is the first step and that coordination with the various DOD organizations will be required to complete this task. DOD concurred with our recommendation to update Okinawa installation master plans to include sustainment requirements and the costs to sustain the U.S. presence on Okinawa until the Marine realignment and Okinawa consolidation efforts are completed. DOD stated that the completion of the bilateral Okinawa Consolidation Plan in April 2013 removed much uncertainty and will allow the development of more detailed master plans for each camp.
DOD concurred with our recommendation to provide, as they become available, annual master schedule and unit movement updates associated with the realignment initiatives on Okinawa to the appropriate Air Force officials, including updated housing requirements and the demographics of Marine Corps families required to be housed on Okinawa. DOD stated that it will direct U.S. Pacific Command and the Marine Corps to provide current fiscal and unit movement data to the Air Force and to update those data as plans are reviewed and revised. DOD concurred with our recommendation to conduct an economic analysis that includes assessing the costs of maintaining vacant housing on Guam. DOD stated that the Navy is conducting a housing market analysis to establish a baseline for long-term military family housing requirements on Guam and that, once the baseline requirements are established, the Navy will conduct a cost/benefit analysis for addressing new requirements related to the Marine Corps realignment. We also provided the Department of State with a draft of this report for official comment, but it declined to comment since the report contains no recommendations for the Department of State. DOD and State provided technical comments separately that were incorporated into the report as appropriate. DOD's written comments are reprinted in appendix VI. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Secretary of State; the Director, Office of Management and Budget; and appropriate organizations. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To evaluate defense posture initiatives in the Asia-Pacific region, we interviewed and collected information from various Department of Defense (DOD) officials, including those from the following organizations:

- Office of the Under Secretary of Defense (Policy);
- Department of the Army;
- Department of the Air Force;
- Department of the Navy;
- U.S. Pacific Command and its Army, Navy, Marine Corps, and Air Force components;
- U.S. Forces–Japan and its Army, Navy, Marine Corps, and Air Force components;
- U.S. Army Corps of Engineers and its Japan Engineering District;
- Joint Region Marianas; and
- Joint Guam Program Office.

We conducted site visits to Yokota Air Base and Marine Corps Air Station Iwakuni in mainland Japan; Kadena Air Base, Camps Schwab and Kinser, and Marine Corps Air Station Futenma on Okinawa; Andersen Air Force Base and several naval installations on Guam; and the proposed training locations on Tinian. Specifically, to determine the extent to which DOD has developed comprehensive plans and cost estimates for the realignment of Marines, we interviewed appropriate DOD and State Department officials, collected plans and cost estimates related to this initiative, and applied the best practices included in the GAO Cost Estimating and Assessment Guide to our assessment of the available data. In addition to the agencies listed above, we interviewed and collected data from officials in the Office of the Under Secretary of Defense (Comptroller); the Director, Cost Assessment and Program Evaluation; the Joint Staff; the Office of Economic Adjustment; U.S.
Transportation Command; the Defense Logistics Agency; the Naval Center for Cost Analysis; and the U.S. Embassy in Tokyo, Japan, and the U.S. Consulate in Naha, Okinawa. For this review, we collected DOD plans and cost estimates associated with the original and current Marine realignment plans, DOD budget data on U.S. projects related to the Defense Posture Review Initiative (DPRI) and the Marine realignment, budget data on DPRI and Host Nation Support expenditures provided to DOD by the government of Japan, military base master plans and housing requirements analyses, and other relevant documentation. To assess the comprehensiveness of DOD's realignment cost estimate, we analyzed DOD's plans, cost analyses, and cost estimating process and compared them with the best practices included in the GAO Cost Estimating and Assessment Guide. To determine the extent to which DOD has planned for and synchronized other U.S. defense posture movements in Okinawa and Japan to coincide with the Marine Corps realignment, we reviewed planning documentation associated with these posture movements for completeness and coordination. We interviewed and collected relevant planning documentation from officials in the DOD offices listed above. We compared the data we received from each component within the Marine Corps to one another, and compared the data we collected from U.S. Pacific Command, U.S. Forces–Japan, the Joint Guam Program Office, and the Office of the Secretary of Defense to determine the status of planning consistencies and synchronization. We evaluated these plans against criteria established in relevant DOD guidance and the key elements for developing a strategy identified in our previous work. To determine the extent to which DOD has identified a plan to sustain its current forces on Okinawa and Guam, we interviewed the DOD officials listed above and conducted site visits where we observed the conditions of facilities and housing, collected appropriate planning documentation and cost data, and assessed the data against GAO cost estimating guidance and DOD planning guidance. We interviewed officials to identify DOD requirements and plans for sustaining U.S. forces on Okinawa until realignment efforts are completed. We collected sustainment planning documentation, base master plans, and historical host nation support and U.S. sustainment cost data for Okinawa and compared them to GAO cost estimating guidance and DOD guidance on installation master planning to determine the extent to which DOD has planned for the sustainment of U.S. forces on Okinawa until realignment efforts are completed. We interviewed officials to determine the extent to which they have planned for the sustainment of family housing on Guam, reviewing planning documentation and analyzing current cost data collection methodologies used by the Air Force and the Navy. We compared sustainment plans to criteria established by relevant DOD guidance on economic analysis for decision making and installation master planning. To determine the reliability of the numerical data provided to us by DOD organizations, we collected information on how the data were collected, managed, and used through interviews and a survey provided to relevant DOD officials. By assessing this information against GAO data quality standards, we determined that the data presented in our findings were sufficiently reliable for the purposes of this report. We conducted this performance audit from November 2011 to March 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

National Defense Authorization Act for Fiscal Year 2013

SEC. 2831. CERTIFICATION OF MILITARY READINESS NEED FOR A LIVE FIRE TRAINING RANGE COMPLEX ON GUAM AS CONDITION ON ESTABLISHMENT OF RANGE COMPLEX. A Live Fire Training Range Complex on Guam may not be established (including any construction or lease of lands related to such establishment) in coordination with the realignment of United States Armed Forces in the Pacific until the Secretary of Defense certifies to the congressional defense committees that there is a military training and readiness requirement for the Live Fire Training Range Complex.

SEC. 2832. REALIGNMENT OF MARINE CORPS FORCES IN ASIA-PACIFIC REGION. (a) RESTRICTION ON USE OF FUNDS FOR REALIGNMENT.—Except as provided in subsection (c), none of the funds authorized to be appropriated under this Act, and none of the amounts provided by the Government of Japan for construction activities on land under the jurisdiction of the Department of Defense, may be obligated to implement the realignment of Marine Corps forces from Okinawa to Guam or Hawaii until each of the following occurs: (1) The Commander of the United States Pacific Command provides to the congressional defense committees an assessment of the strategic and logistical resources needed to ensure the distributed lay-down of members of the Marine Corps in the United States Pacific Command Area of Responsibility meets the contingency operations plans. (2) The Secretary of Defense submits to the congressional defense committees master plans for the construction of facilities and infrastructure to execute the Marine Corps distributed lay-down on Guam and Hawaii, including a detailed description of costs and the schedule for such construction. (3) The Secretary of the Navy submits a plan to the congressional defense committees detailing the proposed investments and schedules required to restore facilities and infrastructure at Marine Corps Air Station Futenma. (4) A plan coordinated by all pertinent Federal agencies is provided to the congressional defense committees detailing descriptions of work, costs, and a schedule for completion of construction, improvements, and repairs to the non-military utilities, facilities, and infrastructure, if any, on Guam affected by the realignment of forces. (b) RESTRICTION ON DEVELOPMENT OF PUBLIC INFRASTRUCTURE.—If the Secretary of Defense determines that any grant, cooperative agreement, transfer of funds to another Federal agency, or supplement of funds available in fiscal year 2012 or 2013 under Federal programs administered by agencies other than the Department of Defense will result in the development (including repair, replacement, renovation, conversion, improvement, expansion, acquisition, or construction) of public infrastructure on Guam, the Secretary of Defense may not carry out such grant, transfer, cooperative agreement, or supplemental funding unless such grant, transfer, cooperative agreement, or supplemental funding is specifically authorized by law.
(c) EXCEPTIONS TO FUNDING RESTRICTION.—The Secretary of Defense may use funds described in subsection (a)— (1) to complete additional analysis or studies required under the National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.) for proposed actions on Guam or Hawaii; (2) to initiate planning and design of construction projects at Andersen Air Force Base and Andersen South; and (3) to carry out any military construction project for which an authorization of appropriations is provided in section 2204, as specified in the funding table in section 4601. (d) DEFINITIONS.—In this section: (1) DISTRIBUTED LAY-DOWN.—The term “distributed laydown” refers to the planned distribution of members of the Marine Corps in Okinawa, Guam, Hawaii, Australia, and possibly elsewhere that is contemplated in support of the joint statement of the United States–Japan Security Consultative Committee issued April 26, 2012, in the District of Columbia (April 27, 2012, in Tokyo). (2) PUBLIC INFRASTRUCTURE.—The term “public infrastructure” means any utility, method of transportation, item of equipment, or facility under the control of a public entity or State or local government that is used by, or constructed for the benefit of, the general public. (e) REPEAL OF SUPERSEDED LAW.—Section 2207 of the Military Construction Authorization Act for Fiscal Year 2012 (division B of Public Law 112-81; 125 Stat. 1668) is repealed.

National Defense Authorization Act for Fiscal Year 2012

SEC. 2207. GUAM REALIGNMENT. (a) RESTRICTION ON USE OF FUNDS.—Except as provided in subsection (c), notwithstanding any other provision of law, none of the funds authorized to be appropriated under this Act, and none of the amounts provided by the Government of Japan for military construction activities on land under the jurisdiction of the Department of Defense, may be obligated to implement the realignment of United States Marine Corps forces from Okinawa to Guam as envisioned in the United States–Japan Roadmap for Realignment Implementation issued May 1, 2006, until— (1) the Commandant of the Marine Corps, in consultation with the Commander of the United States Pacific Command, provides the congressional defense committees the Commandant’s preferred force lay-down for the United States Pacific Command Area of Responsibility; (2) the Secretary of Defense submits to the congressional defense committees a master plan for the construction of facilities and infrastructure to execute the Commandant’s preferred force lay-down on Guam, including a detailed description of costs and a schedule for such construction; (3) the Secretary of Defense certifies to the congressional defense committees that tangible progress has been made regarding the relocation of Marine Corps Air Station Futenma; (4) a plan coordinated by all pertinent Federal agencies is provided to the congressional defense committees detailing descriptions of work, costs, and a schedule for completion of construction, improvements, and repairs to the non-military utilities, facilities, and infrastructure on Guam affected by the realignment of forces; and (5) the Secretary of Defense— (A) submits to the congressional defense committees the report on the assessment of the United States force posture in East Asia and the Pacific region required under section 346 of this Act; or (B) certifies to the congressional defense committees that the deadline established under such section for the submission of such report has not been met.
(b) DEVELOPMENT OF PUBLIC INFRASTRUCTURE.— (1) AUTHORIZATION REQUIRED.—Notwithstanding any other provision of law, if the Secretary of Defense determines that any grant, cooperative agreement, transfer of funds to another Federal agency, or supplement of funds available in fiscal year 2012 under Federal programs administered by agencies other than the Department of Defense will result in the development (including repair, replacement, renovation, conversion, improvement, expansion, acquisition, or construction) of public infrastructure on Guam, such grant, transfer, cooperative agreement, or supplemental funding shall be specifically authorized by law. (2) PUBLIC INFRASTRUCTURE DEFINED.—In this section, the term “public infrastructure” means any utility, method of transportation, item of equipment, or facility under the control of a public entity or State or local government that is used by, or constructed for the benefit of, the general public. (c) EXCEPTION TO RESTRICTION ON USE OF FUNDS.—The Secretary of Defense may use funds described in subsection (a) to carry out additional analysis under the National Environmental Policy Act of 1969 to include the following actions: (1) A re-evaluation of live-fire training range complex alternatives, based upon the application of probabilistic modeling; and (2) The ongoing analysis on the impacts of the realignment and build-up on Guam as described in subsection (a) on coral reefs in Apra Harbor, Guam.

[Table omitted: estimated budget, cost incurred from inception to December 31, 2012, and estimated remaining cost to complete, in millions of dollars.]

In addition to the contact named above, Laura Durland, Assistant Director; Jeff Hubbard; Gilbert Kim; Joanne Landesman; Ying Long; Charles Perdue; Carol Petersen; Karen Richey; Michael Shaughnessy; Amie Steele; and Lindsay Taylor made key contributions to this report.

Force Structure: Improved Cost Information and Analysis Needed to Guide Overseas Military Posture Decisions. GAO-12-711. Washington, D.C.: June 6, 2012.
Military Buildup on Guam: Costs and Challenges in Meeting Construction Timelines. GAO-11-459R. Washington, D.C.: June 27, 2011.
Defense Infrastructure: The Navy Needs Better Documentation to Support Its Proposed Military Treatment Facilities on Guam. GAO-11-206. Washington, D.C.: April 5, 2011.
Defense Management: Additional Cost Information and Stakeholder Input Needed to Assess Military Posture in Europe. GAO-11-131. Washington, D.C.: February 3, 2011.
Defense Planning: DOD Needs to Review the Costs and Benefits of Basing Alternatives for Army Forces in Europe. GAO-10-745R. Washington, D.C.: September 13, 2010.
Defense Management: Improved Planning, Training, and Interagency Collaboration Could Strengthen DOD’s Efforts in Africa. GAO-10-794. Washington, D.C.: July 28, 2010.
Defense Management: U.S. Southern Command Demonstrates Interagency Collaboration, but Its Haiti Disaster Response Revealed Challenges Conducting a Large Military Operation. GAO-10-801. Washington, D.C.: July 28, 2010.
National Security: Interagency Collaboration Practices and Challenges at DOD’s Southern and Africa Commands. GAO-10-962T. Washington, D.C.: July 28, 2010.
Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009.
Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009.
Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009.
Force Structure: Actions Needed to Improve DOD’s Ability to Manage, Assess, and Report on Global Defense Posture Initiatives. GAO-09-706R. Washington, D.C.: July 2, 2009.
Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009.
Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: February 20, 2009.
Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008.
Force Structure: Preliminary Observations on the Progress and Challenges Associated with Establishing the U.S. Africa Command. GAO-08-947T. Washington, D.C.: July 15, 2008.
Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008.
Defense Infrastructure: Overseas Master Plans Are Improving, but DOD Needs to Provide Congress Additional Information about the Military Buildup on Guam. GAO-07-1015. Washington, D.C.: September 12, 2007.
Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007.
Defense Management: Comprehensive Strategy and Annual Reporting Are Needed to Measure Progress and Costs of DOD’s Global Posture Restructuring. GAO-06-852. Washington, D.C.: September 13, 2006.
DOD has stated that it intends to rebalance its defense posture toward the Asia-Pacific region. Japan hosts the largest U.S. forward-operating presence in this region, and the majority of the U.S. forces in Japan are located on Okinawa. The United States and Japan planned to reduce the U.S. military presence on Okinawa by relocating approximately 9,000 Marines. DOD had originally planned to move the Marines only to Guam, but revised its plans in 2012 to include other locations in the Pacific. Congressional committees have directed GAO to examine DOD's initiatives in the Pacific, focusing on planning and costs. This report discusses the extent to which DOD has (1) developed a comprehensive cost estimate for the realignment of Marines, (2) planned for and synchronized other movements to coincide with the realignment, and (3) identified plans to sustain the force until all initiatives are implemented. To address these objectives, GAO reviewed relevant policies and procedures, reviewed and analyzed cost documents related to the realignment initiatives, interviewed DOD officials, and conducted site visits at U.S. military installations in the Pacific.

The Department of Defense's (DOD) preliminary cost estimate for its current realignment plan is not reliable because it is missing costs and is based on limited data. According to DOD officials, DOD has not yet been able to put together a more reliable cost estimate because it will not have specific, detailed information on the plan's requirements until the completion of environmental analyses and host nation negotiations. Currently, DOD estimates that it would cost approximately $12.1 billion to implement its realignment plan--not including the Australia segment of the realignment. Still, GAO found that DOD did not apply some up-front cost estimating practices that do not depend on the completion of the environmental analyses and host nation negotiations and that could have produced a more reliable estimate. Specifically, DOD omitted from its cost estimate any costs associated with mobility support, a critical component of the implementation. Furthermore, although DOD based its cost estimate on several assumptions, there was no evidence that DOD conducted the analysis needed to determine whether those assumptions were reliable. Without a reliable estimate, DOD will not be able to provide Congress and other stakeholders with the information they need to make informed decisions regarding the realignment. DOD has not developed an integrated master plan for its current realignment plan, and it has not developed a strategy to support the development and oversight of the Japanese construction projects associated with other realignment initiatives. DOD has taken initial steps to develop an integrated scheduling document based on currently known data, but indicated that specific requirements, schedules, and costs cannot be formalized in an integrated master plan until several studies and host nation negotiations are completed, which will take several years. Developing a master plan could enhance the management of the realignment by creating a systematic approach to planning, scheduling, and execution. In addition, DOD has not developed a strategy that identifies the resources needed to support the development of and oversight for these projects. According to best practices, a strategy identifies goals and resources and supports the implementation of a program.
Without the information contained in an integrated master plan and a construction support strategy, Congress will be unable to make informed decisions about the order in which it needs to provide funding to support the realignment. DOD has taken some steps to plan for the sustainment of U.S. forces on Okinawa and Guam, but it has yet to fully identify sustainment needs and costs for both locations during this period. At several installations on Okinawa, some of the infrastructure has severely deteriorated. DOD facilities planning guidance calls for updated facility master plans that capture requirements and propose solutions. On Guam, DOD has been maintaining an inventory of unoccupied family housing that could potentially be used for Marines relocating to Guam. However, DOD has not determined all the costs and benefits of maintaining this housing or the Marines' potential housing requirements--information needed to perform an economic analysis. Without an estimate of the sustainment requirements for Okinawa, the costs for maintaining housing, and the potential Marine requirements for housing on Guam, DOD will be unable to make informed decisions on whether continued investment in sustaining these facilities is warranted. GAO recommends that DOD develop more reliable cost estimates and an integrated master plan for the realignment of Marines, develop a mechanism to share annual updates on the status of each, and identify sustainment requirements for affected facilities until realignment initiatives are complete. DOD generally agreed with GAO's recommendations.
In April 2003, the Secretary of Defense charged the military services with supporting six transformational objectives. These objectives not only included a reiteration of the department’s goal to fully implement total asset visibility, but also clearly reflected a growing recognition of the importance of cost information in fulfilling the logistics mission. Further, when DOD released its first Enterprise Transition Plan in 2005, these objectives, including the importance of financial information visibility for use in decision making, were embodied into the department’s six strategic business enterprise transformation priorities. Both DOD and the Air Force have initiatives under way to improve their ability to link financial resources to associated assets, programs, and activities or missions. For example, both the department and the Air Force have efforts underway, such as the Unique Item Identification and Radio Frequency Identification initiatives and the Standard Financial Information Structure (SFIS) initiative, to improve their ability to identify and track assets, including costs, throughout their life cycle. The Air Force, as a DOD component, is confronted with similar management challenges that must be effectively resolved if it is to improve its business operations and in turn provide better support to the warfighter. The following highlights some of the asset management challenges the Air Force is attempting to resolve to achieve total asset visibility. Excess inventory. We have previously reported that more than half of the Air Force’s secondary inventory (spare parts), worth an average of $31.4 billion, was not needed to support required on-hand and on-order inventory levels from fiscal years 2002 through 2005. The Air Force has continued to purchase unneeded on-order inventory because its policies do not provide incentives to reduce the amount of inventory on order that is not needed to support requirements. Financial management and reporting. The DOD Inspector General reported in November 2007, and the Air Force acknowledged, that the Air Force continues to have significant internal control deficiencies that impede the ability of its general and working capital funds to produce accurate and reliable information on the results of their operations. Deficiencies were found in the following areas: (1) financial management systems, (2) government furnished and contractor-acquired materiel (general fund), (3) environmental liabilities, (4) operating materials and supplies, (5) accounting entries, (6) property, plant, and equipment, and (7) in-transit inventory (working capital fund). Deployed assets. In January 2007, the Air Force Audit Agency reported that the Air Force had lost control and accountability over 5,800 assets, valued at approximately $108 million, in part because its logistical systems did not provide Air Force personnel with the capability to effectively manage, track, and monitor deployed assets. For example, the system incorrectly reported that assets were deployed to closed bases. Additionally, the systems did not provide reliable asset information, such as asset quantities and location. As a result of these weaknesses, Air Force management did not have total asset visibility and was not able to determine if the right assets were at the right location to meet mission requirements. Government furnished material. In January 2007, the Air Force Audit Agency also reported that the Air Force did not effectively manage government furnished material. 
More specifically, the Air Force Audit Agency reported that the Air Force logistics personnel inappropriately provided government furnished material to contractors that were not authorized by contract documentation to receive this material. This problem could adversely affect mission support if the Air Force loses assets that should be in inventory. In addition, poor accountability controls increase the Air Force’s susceptibility to fraud and misuse of government resources. Successful transformation of DOD’s business operations, including the achievement of total asset visibility, will require a multifaceted, cross- organizational approach that addresses the contribution and alignment of key elements, including strategic plans, people, processes, and technology. The following highlights key DOD and Air Force transformation plans that are aimed at enhancing business operations and supporting the department’s total asset visibility goal. Enterprise Transition Plan. DOD guidance states that the Enterprise Transition Plan is intended to provide a road map for achieving DOD’s business transformation through technology, process, and governance improvements. According to DOD, the Enterprise Transition Plan is intended to summarize all levels of transition planning information (milestones, metrics, resource needs, and system migrations) as an integrated product for communicating and monitoring progress—resulting in a consistent framework for setting priorities and evaluating plans, programs, and investments. DOD updates the Enterprise Transition Plan twice a year, once in March as part of DOD’s annual report to Congress and again in September. Although the Enterprise Transition Plan provides an overall strategy and corresponding metrics for achieving each of the department’s six business enterprise priorities, DOD officials have acknowledged improvements are needed in the plan to provide a clearer assessment of the department’s transformation effort. DOD officials have also acknowledged the need for an integrated planning process and results-oriented measures to assess overall business transformation. Financial Improvement and Audit Readiness (FIAR) Plan. A major component of DOD’s business transformation effort is the Defense FIAR Plan. The FIAR Plan is updated twice a year and is intended to provide DOD components with a framework (audit readiness strategy) for resolving problems affecting the accuracy, reliability, and timeliness of financial information and obtaining clean financial statement audit opinions. The FIAR Plan’s audit readiness strategy consists of six phases: (1) discovery and correction, (2) segment assertion, (3) audit readiness validation, (4) audit readiness sustainment, (5) financial statement assertion, and (6) financial statement audit. Each military service is required to develop subordinate plans that are to support the FIAR Plan in achieving its objectives. Air Force Financial Management Strategic Plan for Fiscal Years 2007-2012. This plan identifies seven financial management goals for transforming the Air Force’s financial management operations. 
Those goals are (1) foster mutual respect and integrity, (2) reduce Air Force cost structure, (3) expand partnership in strategic Air Force decisions, (4) recruit, prepare, and retain a well-trained and highly educated professional team for today and tomorrow, (5) provide customers with world-class financial services, (6) implement open, transparent business practices, and achieve a clean financial statement audit, and (7) continuously streamline financial management processes and increase capabilities. In addition, the plan also identifies specific objectives for each goal, some of the actions that will be taken to accomplish the objectives, and 13 financial management metrics. Air Force Logistics Enterprise Concept of Operations. This document presents a collection of high-level requirements for transforming Air Force logistics. It establishes the process framework, standards, and guidelines to define the environment in which future logistics systems can be identified, acquired, or built. Further, it aims to serve as a catalyst for developing doctrine, policies, and organizational structure consistent with the vision outlined in the Air Force Expeditionary Logistics for the 21st Century Campaign Plan, needed to enable logistics transformation. Air Force Information Reliability and Integration Action Plan/ Financial Improvement Plan. This plan describes actions planned to identify and address impediments to the Air Force’s ability to achieve clean financial statement audit opinions. The Air Force Information Reliability and Integration Action Plan, commonly referred to as the Air Force Financial Improvement Plan, includes specific tasks, completion dates, start dates, owner/lead components, and points of contact for addressing weaknesses adversely affecting the reliability of individual Air Force financial statement line items and is intended to support the department’s FIAR Plan. Air Force Military Equipment Accountability Improvement Plan. This plan is intended to define how the Air Force will implement measures to properly collect, account for, track, and report military equipment values. This plan is intended to identify the actions required to resolve any existing problems or impediments to achieving auditable values for military equipment items. The Air Force Military Equipment Accountability Improvement Plan is intended to be incorporated into the Air Force’s Financial Improvement Plan and DOD’s FIAR Plan. ECSS and DEAMS are two business systems initiatives identified by the Air Force that are intended to help it address asset accountability weaknesses and achieve its total asset visibility goal. While these programs are intended to provide the Air Force with the full spectrum of logistics and financial management capabilities, our review identified areas where the Air Force had not fully implemented key best practices related to risk management for ECSS and DEAMS and system testing for DEAMS. ECSS and DEAMS are intended to support the Air Force’s efforts to transform its business operations and provide accurate, reliable, and timely information to support decision making and management of the Air Force’s business operations, including total asset visibility. The ECSS program was initiated in January 2004 and is expected to provide a single, integrated logistics system, including transportation, supply, maintenance and repair, and other key business functions directly related to logistics, such as engineering and acquisition, at a total life-cycle cost over $3 billion. 
Initially, the Air Force anticipated achieving full operational capability of ECSS during fiscal year 2012. Due to delays as a result of two contract award protests, the Air Force now expects ECSS to reach full operational capability in fiscal year 2013. When fully implemented, ECSS is expected to replace about 250 legacy logistics and procurement (acquisition) systems and support over 250,000 users Air Force-wide. ECSS is considered a key element in the Air Force’s efforts to reengineer and transform its supply chain operations from a reactive posture to a more predictive posture that facilitates greater effectiveness and efficiency in the Air Force’s logistics operations that support the warfighter. ECSS is intended to interface with DEAMS to provide the Air Force with improved financial visibility over Air Force assets. Additionally, implementation of ECSS is expected to address long-standing weaknesses in supply chain management, a DOD issue that has been on our high-risk list since 1990. In this regard, the redesign of the Air Force’s supply chain operations, in part through implementation of ECSS, is expected to address four broad Air Force logistical issues: (1) lack of an enterprise view, (2) fragmented planning processes, (3) lack of process integration, and (4) no enterprise-level systems strategy. Figure 1 provides information related to ECSS’s timeline for implementation and funding. Currently, the program is undergoing a process referred to as “blueprinting” to identify needed interfaces and data requirements. After blueprinting is completed in fiscal year 2009, the Air Force will begin system testing and initial implementation of ECSS. As of December 2007, the Air Force reported that approximately $250 million had been obligated in total for the ECSS effort. As shown in figure 1, the Air Force estimates a total life-cycle cost of $3 billion; however, the total life-cycle cost of ECSS is likely to increase due to an Air Force decision to add functionality. In January 2008, Air Force ECSS program management officials informed us that ECSS would assume financial management control and accountability, including invoice processing and financial reporting responsibility, for the Air Force’s working capital fund operations. Prior to this decision, the Air Force had designated DEAMS as the business system initiative it intended to use to improve the financial management capabilities of both the Air Force’s working capital and general funds. The Air Force is currently in the process of determining the cost of this decision and how much it will add to its already recognized funding shortfall for ECSS of approximately $697 million. According to ECSS program management office officials, ECSS’s funding shortfall resulted from contract order award protests that caused stop-work actions. As a result of the stop-work actions, the ECSS program management office was not able to spend money for work as planned, which caused the Air Force to reallocate the money to other Air Force requirements, ultimately resulting in unfunded ECSS requirements. The DEAMS program was initiated in August 2003 and is expected to provide general fund accounting for the entire Air Force at a total life-cycle cost of over $1 billion. 
In the past, a lack of integration between business systems, including logistics and financial management systems, has adversely affected the ability of DOD and the Air Force to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, and prevent fraud. If the information contained in asset and financial accountability systems is not accurate, complete, and timely, the Air Force’s day-to-day operations could be adversely affected by, for example, investment in inventory that is not needed to meet current needs or for which the Air Force has not allocated sufficient resources or authority to purchase. Both physical and financial accountability are essential to achieving total asset visibility and DOD’s objective of providing information to support decision making. According to Air Force officials, DEAMS will replace seven legacy accounting systems. As depicted in figure 2, Air Force program management officials expect DEAMS to reach initial operational capability during fiscal year 2011 and full operational capability by fiscal year 2014 with a total life-cycle cost of about $1.1 billion. DOD defines total life-cycle cost as the total cost to the government of acquisition and ownership of that system over its useful life. It includes the cost of acquisition, operations, and support (including manpower) and, where applicable, disposal. Figure 2 provides information related to DEAMS’s timeline for implementation and funding. The DEAMS business system initiative was approved by the Office of the Secretary of Defense Business Management Modernization Program’s Financial Management Transformation Team as a joint United States Transportation Command (Transportation Command), Defense Finance and Accounting Service, and Air Force project. According to Air Force officials, DEAMS will be implemented in two increments—the first at the Transportation Command and the second at the Air Force. During the first incremental deployment of DEAMS, which began at Scott Air Force Base, Illinois, on July 27, 2007, approximately 200 users within the Transportation Command, the Air Force’s Air Mobility Command component, and other selected tenant organizations at Scott Air Force Base began to receive limited accounting capabilities (starting with commitment accounting). As of December 2007, the Air Force reported that approximately $119 million had been obligated for this system. By the end of the increment 1 deployment phase, which is expected to be completed by December 2010, DEAMS is intended to provide Scott Air Force Base with the entire spectrum of core financial management capabilities, including collections, commitments/obligations, cost accounting, general ledger, funds control, receipt and acceptance, accounts payable and disbursement, billing, and financial reporting. Deployment of DEAMS to an estimated 28,000 users at other Air Force locations will occur during the DEAMS increment 2 deployment phase. The Air Force had not yet fully embraced or implemented key business system best practices in several areas. Best practices are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of a program’s success.
Collectively, these practices are intended to reasonably ensure that the investment in a given system represents the right solution to fill a mission need—and if the solution is right, that acquisition and deployment are done the right way, meaning that they maximize the chances of delivering defined system capabilities on time and within budget. Specifically, we found that the Air Force had not fully implemented key best practices related to risk management for ECSS and DEAMS and system testing for DEAMS. These findings increase the risk that these two business systems will not meet their stated functionality, cost, and milestone goals or effectively further the Air Force’s efforts to achieve total asset visibility. The Air Force did not have reasonable assurance that its risk management process would accomplish its primary purpose—managing a program’s risks to acceptable levels by taking the actions necessary to identify and mitigate the adverse effects of risks before they affect the program. The objective of a well-managed risk management program is to provide a repeatable process for balancing cost, schedule, and performance goals within program funding. According to DOD’s Risk Management Guide for DOD Acquisition, risk management is most effective if it is fully integrated within a program. Our analysis of the ECSS and DEAMS risk management programs found that neither program used a comprehensive and fully integrated risk management process. Program risk was monitored, overseen, and managed independently by various groups or activities within the program without adequate visibility, at the program management level. Without adequate visibility of risk management activities programwide, the program management office has little assurance of the sufficiency of actions taken by its subordinate groups or activities to identify, analyze, and mitigate risk that may affect other groups or the program itself. A single risk management process for each program with clear linkages to subordinate risk management activities throughout the program would provide greater visibility and assurance that appropriate actions are taken to identify and address risks. Acquiring software is a risky endeavor and risk management processes are intended to help the program manager and senior leadership ensure that actions are taken to mitigate the adverse effects of each determined program risk. If program risks are not effectively communicated and managed, then the risks will manage the program, potentially leading to increased costs to ultimately address the impact of a realized risk or implement a program that does not provide the intended capabilities. The following highlights specific risk management issues that we identified within the Air Force’s current approach. Interfaces. Our analysis of ECSS and DEAMS risk management processes found that even when risks were identified at lower levels within a program, the level of detail at the program level was not always sufficient to provide program managers with the visibility needed to effectively assess and manage certain risks at those levels. Although the ECSS and DEAMS program management offices identified interfaces as potential areas of risk at lower levels within the program, we found that neither program management office consistently identified interfaces as a risk at the program level. 
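The visibility gap described here is, at bottom, an aggregation problem: risks that subordinate groups track never surface in the program-level register. The sketch below shows one way a program office could roll subordinate risk logs up into a single program-level view and flag categories, such as interfaces, with no visible entry. It is a minimal, hypothetical illustration; the group names, categories, and entries are invented for this example and are not drawn from the actual ECSS or DEAMS risk management systems.

```python
from collections import defaultdict
from dataclasses import dataclass

# Risk categories the report identifies as commonly tracked below the program level.
PROGRAM_CATEGORIES = {"interfaces", "data conversion", "change management", "contractor oversight"}

@dataclass
class Risk:
    group: str         # subordinate group that logged the risk
    category: str      # e.g., "interfaces"
    description: str
    severity: int      # 1 (low) through 5 (high)

def roll_up(risks):
    """Group subordinate-level risks by category for a program-level summary."""
    summary = defaultdict(list)
    for risk in risks:
        summary[risk.category].append(risk)
    return summary

def visibility_gaps(summary):
    """Return the categories with no risk visible at the program level."""
    return sorted(PROGRAM_CATEGORIES - set(summary))

if __name__ == "__main__":
    # Hypothetical entries logged by two subordinate groups.
    subordinate_risks = [
        Risk("Interface and Conversion Group", "interfaces",
             "Dozens of legacy interfaces must be built or retired", 4),
        Risk("Deployment Group", "change management",
             "Users at follow-on sites not yet trained on new processes", 3),
    ]
    summary = roll_up(subordinate_risks)
    for category in sorted(summary):
        highest = max(risk.severity for risk in summary[category])
        print(f"{category}: {len(summary[category])} risk(s), highest severity {highest}")
    print("No program-level entry for:", ", ".join(visibility_gaps(summary)))
```

The specific tooling matters less than the principle: every risk a subordinate group tracks should be visible, at least in summary form, at the program level.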
In the case of DEAMS, the information in the program-level risk management system did not disclose that 70 key interfaces must be dealt with in order to implement the system, even though this level of detail was maintained at a lower level by the DEAMS Interface and Conversion Group. Without visibility of risks identified at all levels of a program, it is difficult, if not impossible, for the program manager or other senior-level officials to ascertain if the various risks that are associated with a program of this magnitude are effectively identified and managed. We have previously reported that interfaces are critical elements in successfully implementing a new system and that failure to properly address interface risks has contributed to system failures in previous agency efforts. Data conversion. In implementing ECSS and DEAMS, the Air Force will have to expend considerable resources to clean up and transfer the data in the existing legacy systems to ECSS or DEAMS. However, we found that only the ECSS risk management program identified data quality as an issue in its discussion of data conversion. Much like system interfaces, each effort to convert data needs to be separately identified and managed so that (1) the risks associated with a given effort can be identified, (2) adequate mitigating actions can be developed for those risks, and (3) the effectiveness of the mitigating actions can be monitored. For example, in June 2005, we reported that data conversion problems seriously affected the Army’s ability to implement its Logistics Modernization Program at the Tobyhanna Army Depot, Tobyhanna, Pennsylvania. These problems affected reporting of revenue earned, accountability over orders received from customers, and the preparation of billings. As discussed in our July 2007 report, the Army and its contractor still had not resolved the issue of customers being improperly billed. Change management. The DEAMS program management office did not identify change management as a risk in its risk management system; however, it was included as a risk by the ECSS program management office. Change management is the process of preparing users for the changes that should occur with the implementation of a new system. It involves engaging system users and communicating the nature of anticipated changes to them, including through training on how their jobs will change. This is necessary because commercial products are created with the developers’ expectations of how they will be used, and the products’ functionality may require the organization implementing the system to change existing business processes. However, neither the ECSS nor the DEAMS program had identified training as a potential change management risk at the program level. As discussed previously, the lack of sufficient program-level transparency of risks identified at lower levels may impede the ability of ECSS and DEAMS program managers and senior-level officials to ensure that risks are effectively mitigated. Further, the lack of centralized visibility may also minimize program efficiencies that could be gained through shared knowledge of risks identified by other groups within the program and actions planned or taken to mitigate them. As we have previously reported, having staff with the appropriate skills is a key element for achieving financial management improvement. The implementation of a new system is intended to bring about improvements in the way an entity performs its day-to-day business operations.
We have issued several reports that associated the lack of effective change management to program schedule slippages. Unless those intended changes are clearly identified and communicated to the affected employees, the changes in the organization’s business processes may not occur or be less effective and efficient than envisioned. Contractor oversight. The Air Force’s ability to manage these two programs—including oversight of contractors—is critical to reducing the risks to acceptable levels. Both ECSS and DEAMS program management officials identified staffing shortfalls within their respective offices as program risks. In addition, both offices identified actions needed to mitigate the impact the shortfalls may have on their programs. However, neither program management office considered whether their programs had staff with the appropriate skill sets to effectively oversee and manage their respective contractors. Since the contractors for each program are performing many of the key tasks, including how the system will perform and what information or capabilities it will provide, it is critical that the Air Force have an effective monitoring process to oversee the contractors and ensure that the project management processes employed by contractors were effectively implemented. During discussions on their respective programs, in March 2008, both ECSS and DEAMS program management officials stated that they thought their existing risk management programs provided adequate visibility over risks within their respective programs. However, after discussing our concerns with the program management officials, they agreed with us that their program level risk management programs could be improved to provide better links to the various risks identified and the risk management processes used by the groups within their programs. They also agreed that this would help them achieve reasonable assurance that their decentralized risk management program is achieving the objectives of a more traditional centralized risk management process. A limited version of DEAMS was deployed at Scott Air Force Base in July 2007. A follow-on deployment intended to provide DEAMS functionality to additional users, originally scheduled for October 2007, was placed on hold to address a series of software and connectivity issues that were identified after the initial deployment. According to Air Force DEAMS program management officials, DEAMS was functioning as intended on the older Air Force standard computer desktop configuration; however, problems occurred when the system was deployed to offices that were utilizing a newer computer desktop configuration than the one the program management office had utilized in its initial tests. Air Force DEAMS program management officials stated that they did not include the potential of encountering different operating environments at deployment locations as a potential program risk because they thought that there was a standard computer desktop configuration across the Air Force and therefore the risk was remote. DEAMS program management officials acknowledged that the standardization of computer desktops across the Air Force is a major challenge and that encountering it during the DEAMS deployment at Scott Air Force Base was a “lessons learned.” Further, DEAMS program management officials stated that system “patches” to address the problem have been tested on multiple computer desktop configurations at Scott Air Force Base to ensure that DEAMS operates as intended at that location. 
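Verifying that a site's workstations match a configuration the system has actually been tested against is the kind of check that can be at least partly automated before each deployment. The following is a minimal sketch of such a comparison; the configuration attributes, version strings, and baseline values are hypothetical and do not describe the Air Force's actual standard desktop configuration or the DEAMS test environment.

```python
# Hypothetical baseline describing a desktop configuration the system was tested against.
TESTED_BASELINE = {
    "os_version": "5.1.2600",       # illustrative values only
    "browser_version": "6.0",
    "java_runtime": "1.5.0_11",
}

def configuration_differences(site_config, baseline=TESTED_BASELINE):
    """Return attributes where a site's desktop configuration differs from the tested baseline."""
    differences = {}
    for attribute, expected in baseline.items():
        actual = site_config.get(attribute, "missing")
        if actual != expected:
            differences[attribute] = (expected, actual)
    return differences

if __name__ == "__main__":
    # Hypothetical configuration reported by a workstation at a follow-on deployment site.
    site_config = {
        "os_version": "5.1.2600",
        "browser_version": "7.0",   # newer than the configuration used in testing
        "java_runtime": "1.5.0_11",
    }
    differences = configuration_differences(site_config)
    if differences:
        for attribute, (expected, actual) in differences.items():
            print(f"{attribute}: tested with {expected}, site reports {actual} -- retest before deployment")
    else:
        print("Site configuration matches a tested baseline.")
```

A check like this does not remove the need for on-site testing, but it flags mismatches, such as the newer desktop configuration encountered at Scott Air Force Base, before a deployment rather than after.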
According to DEAMS program management officials, they started the redeployment of DEAMS at the end of March 2008, and they do not anticipate that this will result in a significant delay, if any, toward achieving full deployment of DEAMS within fiscal year 2014. However, unless the DEAMS program management office obtains a clear understanding of the environment in which DEAMS will be deployed, DEAMS will likely suffer additional implementation delays. Further, ECSS is also likely to encounter nonstandardized computer desktop configurations during its deployment. Both ECSS and DEAMS program management officials acknowledged that nonstandardized computer desktop configurations will continue to represent a potential program risk and indicated that they intend to test desktop configurations at each deployment location in the future. DEAMS program management officials are working with its contractor and other Air Force personnel to develop a long-term solution for the DEAMS program. Viewed from a broad perspective, the Air Force does not have a single comprehensive plan or integrated set of plans to support DOD business transformation priorities, transform Air Force business operations, and achieve total asset visibility. Rather, the Air Force is utilizing several individual business transformation plans and efforts. Our analysis of these plans disclosed that they are neither fully integrated with each other nor are they fully aligned with business transformation priorities and related performance measures or metrics outlined in DOD’s Enterprise Transition Plan. Integration and coordination of improvement efforts within a component and clear alignment of those efforts with DOD’s Enterprise Transition Plan is necessary to achieve both the components’ and DOD’s business transformation priorities and goals, including total asset visibility. Without clear alignment of transformation plans, priorities, and metrics, both DOD and the Air Force will have difficulty (1) ensuring that transformation efforts, such as ECSS and DEAMS, are efficiently and effectively directed at achieving DOD’s business transformation priorities/goals, including total asset visibility, and (2) measuring and reporting on progress toward the capabilities necessary for achieving an intended business transformation priority, such as financial and materiel visibility. Air Force officials acknowledged that integration of their plans within the Air Force and with the DOD’s Enterprise Transition Plan could be improved and indicated that they intend to make improvements to their plans. By not fully aligning and integrating these transformation strategies and plans, the Air Force risks falling short of significantly enhancing its ability to provide the right equipment and materiel, in the right condition, at the correct place, when needed to support the warfighter. Our review of several Air Force strategic documents and plans, such as its Financial Management Strategic Plan, Accountability Improvement Plan, and Logistics Enterprise Architecture Concept of Operations, found that the plans were not clearly linked to each other or with DOD’s Enterprise Transition Plan. Air Force Financial Management Strategic Plan. This plan outlines seven goals for transforming Air Force financial management. However, the plan contains no reference to the priorities, objectives, or capabilities identified in DOD’s Enterprise Transition Plan. 
Additionally, the Air Force Financial Management Strategic Plan does not identify any performance measures or metrics that the Air Force intends to use to measure incremental progress toward achieving its own stated financial management goals or DOD’s business transformation priorities. It is also unclear how certain Air Force financial management goals, such as to “foster mutual respect and integrity” or “recruit, prepare, and retain a well- trained and highly educated professional team for today and tomorrow,” specifically relate to achieving the four financial visibility objectives identified in DOD’s Enterprise Transition Plan: (1) produce and interpret relevant, accurate, and timely financial information that is readily available for analyses and decision making, (2) link resource allocation to planned and actual business outcomes and warfighter missions, (3) produce comparable financial information across organizations, and (4) achieve audit readiness and prepare auditable financial statements. Air Force Military Equipment Accountability Improvement Plan. This plan is intended to support the department’s valuation of military equipment and the Air Force’s and DOD’s goal to obtain auditable financial statements. However, the relationship between the Air Force Military Equipment Accountability Improvement Plan to other Air Force transformation plans or initiatives, such as the Air Force Logistics Enterprise Architecture Concept of Operations, in transforming the Air Force’s business operations is not articulated in the plan. For example, although the Under Secretary of Defense for Acquisition, Logistics, and Technology tasked the Air Force and other military components with preparing a military equipment accountability improvement plan, the plan does not explain how resolution of these problems will support the Air Force’s logistics goals to improve operational capability, while minimizing the cost to deliver capability. Further, the Air Force Military Equipment Accountability Improvement Plan does not discuss how its efforts contribute, individually or as part of a collective Air Force effort, to incremental and measurable improvements in the visibility of Air Force logistical and financial information for decision making, analysis, and reporting—a key transformation priority identified in DOD’s Enterprise Transition Plan. None of the various Air Force strategic plans we analyzed included performance measures or metrics that could be used to systematically assess and report on transformation progress. Without adequate metrics, both Air Force and DOD management face a difficult challenge in monitoring implementation of Air Force plans and assessing the Air Force’s progress in improving its processes, controls, and systems and achieving DOD’s business transformation priorities, including total asset visibility. Our prior work has identified at least four characteristics common to successful hierarchies of performance measures or metrics: (1) demonstrated results, (2) limited to a vital few, (3) corresponding to multiple priorities, and (4) linked to responsible programs. Simply stated, performance measures should tell each organizational level how well it is achieving its own and shared goals and priorities. Examples of the lack of consistent metrics follow. Air Force Logistics Enterprise Architecture Concept of Operations. 
None of the six materiel visibility business capability improvement metrics included in the DOD Enterprise Transition Plan are identified in the Air Force Logistics Enterprise Architecture Concept of Operations. Further, the Air Force Logistics Enterprise Architecture Concept of Operations identified only two measures or goals: (1) increase equipment availability by 20 percent no later than fiscal year 2011 and (2) reduce annual operating and support cost by 10 percent no later than fiscal year 2011. While these are notable goals, these metrics do not provide a means to measure incremental progress in improving the Air Force’s ability to locate and account for materiel assets throughout their life cycle. Air Force Financial Management Strategic Plan. This plan identified 13 metrics, some of which pertained to reducing interest penalties paid, lost discounts, and unmatched disbursements, to support an assessment of the current state of the Air Force’s financial management. However, the Air Force Financial Management Strategic Plan did not include metrics that the Air Force can use to measure the progress of its various financial management initiatives in transforming the Air Force’s financial management and related business operations and achieving DOD business transformation priorities. For example, none of the 13 metrics outlined in the Air Force Financial Management Strategic Plan could be used to measure, monitor, or report incremental progress toward producing and interpreting relevant, accurate and timely financial information that is readily available for analyses and decision making—a key financial visibility objective identified in DOD’s Enterprise Transition Plan. Air Force Military Equipment Accountability Improvement Plan and its Financial Improvement Plan. Neither plan included performance metrics to measure the effectiveness of planned actions to resolve identified weaknesses that have adversely affected the reliability of reported financial and physical accountability information. Specifically, we found that the Air Force’s status reporting for both initiatives consisted primarily of the completion of milestone dates associated with steps outlined by DOD in its FIAR Plan for achieving auditability of its financial statements. As a result, the Accountability Improvement Plan and the Financial Improvement Plan provide little information on incremental improvements made in the Air Force’s financial management capabilities, including decision making support. Moreover, when we compared the Financial Improvement Plans dated August 1, 2007, and October 11, 2007, we identified numerous inconsistencies that raise concerns regarding the oversight and monitoring provided to these plans and their reported progress. For example, we found 211 of the total 1,762 tasks in the October 2007 Financial Improvement Plan had completion dates identified as prior to October 1, 2007; however, the reported progress toward completion for each of these tasks was identified as zero, and 61 of the total 1,279 tasks that were included in both the August 2007 and October 2007 Financial Improvement Plans showed a decline in the percentage completion total reported for the same tasks between the two plans. The Air Force’s efforts to transform its logistics and financial management operations through system, process, and control changes are being guided by numerous strategies and plans that are not fully integrated within the Air Force and with DOD’s business enterprise transformation priorities. 
As the Air Force deploys ECSS and DEAMS, it is important that it utilize a comprehensive and integrated risk management process to identify, analyze, and mitigate risks and configuration issues that may impede successful deployment of these systems throughout the Air Force, such as testing computer desktop configurations at each deployment location. Additionally, successful transformation will require a comprehensive plan or integrated set of plans and effective processes and tools, such as results-oriented performance measures that link enterprise and unit goals and expectations, for measuring, monitoring, and reporting progress in accomplishing the department’s priorities. Until the Air Force’s efforts are aligned within the Air Force and with DOD’s business transformation priorities, and best practices are fully adopted to minimize risk and maximize chances for success, the risk increases that billions of dollars will be wasted and the efforts will not achieve the transformation envisioned for the future. To improve the department’s efforts to achieve total asset visibility and further enhance its efforts to improve control and accountability over business system investments and achieve its business transformation priorities, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following three actions: Direct Air Force program management officials for ECSS and DEAMS to ensure that risk management activities at all levels of the program are identified and communicated to program management to facilitate oversight and monitoring. Key risks described at the appropriate level of detail should include and not be limited to risks associated with interfaces, data conversion, change management, and contractor oversight. Direct the Air Force program management offices to test ECSS and DEAMS on relevant computer desktop configurations prior to deployment at a given location. Direct Air Force organizations responsible for the business transformation plans discussed in this report to align their respective plans, including efforts aimed at achieving total asset visibility, with priorities included in DOD’s Enterprise Transition Plan. Further, these plans should include metrics to measure, monitor, and report progress in accomplishing the business priorities identified in DOD’s Enterprise Transition Plan. We received written comments on a draft of this report from the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II. DOD concurred with our recommendations and identified specific actions it plans to take to implement these recommendations. For example, the ECSS program management office has added GAO-identified risks to its inventory of program risks. Additionally, the DEAMS program management office intends to centralize two subordinate risk management activities into a single program-level risk management process. Further, in its rewrite of the DEAMS program charter for the department’s Business Capability Lifecycle process, the DEAMS program management office stated its intent to implement a program-based risk management process that addresses all risk areas noted by GAO. In addition, the department noted that the Air Force is updating its Financial Improvement Plan to assure alignment with the department’s Financial Improvement and Audit Readiness plan. 
DOD stated that the Air Force will ensure that the Financial Improvement Plan is aligned with the Air Force Financial Management Strategic Plan and DOD’s Enterprise Transition Plan.

We are sending copies of this report to the Secretary of Defense; Secretary of the Air Force; Deputy Under Secretary of Defense (Business Transformation); Assistant Secretary of the Air Force (Financial Management and Comptroller); Air Force Chief Information Officer; Air Force Deputy Chief of Staff (Logistics); and other interested congressional committees and members. Copies of this report will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. Please contact Paula M. Rascona at (202) 512-9095 or [email protected], Nabajyoti Barkakati at (202) 512-4499 or [email protected], or William M. Solis at (202) 512-8365 or [email protected] if you or your staff have questions on matters discussed in this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To determine the implementation status of the Air Force’s current business system initiatives to achieve total asset visibility, and whether the Air Force has implemented related best practices, we reviewed Air Force business system budget documentation and met with Air Force Chief Information Officer personnel and DOD Business Transformation officials. Most of the financial information in this report related to ECSS and DEAMS was obtained from the respective program management offices and is presented for informational purposes only; it was not used to develop our findings and recommendations. We interviewed and obtained briefings from ECSS and DEAMS Air Force program management officials, Business Transformation Agency officials, and Air Force Financial Management and Comptroller officials, and we reviewed documentation they provided, to further our understanding of the intended purpose of each system and its role in supporting the Air Force’s efforts to achieve total asset visibility and transform its business operations. During this audit, we did not review ECSS and DEAMS compliance with the Air Force’s enterprise architecture because of ongoing GAO work focused on ascertaining the status of the military services’ efforts to develop and utilize an enterprise architecture. The results of our work are discussed in our May 2008 report, which noted that while the Air Force’s efforts to develop an enterprise architecture were further along than those of the Army and Navy, the Air Force’s architecture was not sufficiently developed to guide and constrain its business systems modernization investments. To determine whether any improvements were needed in the Air Force’s approach for acquiring and implementing these business systems, we evaluated the ECSS and DEAMS risk management programs, reviewed Air Force guidance related to risk management, and obtained an explanation from each program management office of how it managed its risk management program. Additionally, we analyzed risk management reports that were prepared by each program management office and reviewed risk management briefings that were presented to senior Air Force management.
We compared risk management reports for both programs with applicable Air Force guidance to ascertain whether each program identified the risks associated with the acquisition and implementation of a system. To determine whether the Air Force’s business transformation efforts to achieve total asset visibility are aligned within the Air Force and with DOD’s broader business transformation priorities, we interviewed officials from the Air Force’s Financial Management and Comptroller Office and the Air Force Logistics Enterprise Architecture and ECSS Transformation Management Division. There are many DOD and Air Force transformation plans and initiatives, such as DOD’s Enterprise Transition Plan and Quadrennial Defense Review Report, the Air Force Strategic Plan, and Air Force Smart Operations for the 21st Century. However, following discussions with Air Force officials, we focused our review on the Air Force Financial Management Strategic Plan for fiscal years 2007-2012, the Logistics Enterprise Architecture Concept of Operations, the Financial Improvement Plans for August 2007 and October 2007, and the Military Equipment Accountability Improvement Plan issued in December 2006 because they are more directly related to total asset visibility and related business transformation efforts. We analyzed and compared these documents to assess consistency among the plans and approaches both within the Air Force and with DOD’s Enterprise Transition Plan’s business transformation priorities and metrics. We conducted this performance audit from July 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Most of the financial information in this report related to the ECSS and DEAMS programs was obtained from the respective program management offices and is presented for informational purposes only; it was not used to develop our findings and recommendations. To assess the reliability of the funding data, we interviewed Air Force program management office officials knowledgeable about funding and reviewed budgetary data on the Air Force’s investment in ECSS and DEAMS. We conducted our work at the DOD Business Transformation Agency, the Air Force Chief Information Officer Office, the Air Force Financial Management and Comptroller Office, and the Air Force Logistics Enterprise Architecture and ECSS Transformation Management Division in Arlington, Virginia. Additionally, we made site visits to the Air Force program management offices for ECSS and DEAMS at Wright-Patterson Air Force Base in Dayton, Ohio. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments from the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II.

In addition to the above contacts, the following individuals made key contributions to this report: J. Christopher Martin, Senior-Level Technologist; Darby Smith, Assistant Director; Evelyn Logue, Assistant Director; F. Abe Dymond, Assistant General Counsel; Beatrice Alff; Harold Brumm, Jr.; Francine DelVecchio; Jason Kelly; Jason Kirwan; Chanetta Reed; Debra Rucker; and Tory Wudtke.
The Department of Defense (DOD) established a goal to achieve total asset visibility over 30 years ago. This initiative aims to provide timely, accurate information on the location, movement, status, and identity of equipment and supplies. To date, the effort has been unsuccessful. GAO was requested to determine (1) the implementation status of the Air Force's business system initiatives to achieve total asset visibility, and whether the Air Force has implemented related best practices, and (2) whether the Air Force's business transformation efforts to achieve total asset visibility are aligned within the Air Force and with DOD's broader business transformation priorities. GAO interviewed Air Force officials and reviewed Air Force documentation to obtain an understanding of the Air Force's system initiatives and strategy for achieving total asset visibility and to identify areas for improvement. The Air Force has identified the Expeditionary Combat Support System (ECSS) and the Defense Enterprise Accounting and Management System (DEAMS) as key technology enablers of the Air Force's efforts to transform its logistics and financial management operations and achieve total asset visibility--a key DOD priority. ECSS is expected to provide a single, integrated logistics system, including transportation, supply, maintenance and repair, and other key business functions directly related to logistics, such as engineering and acquisition. Additionally, ECSS will perform financial management and accounting for the Air Force working capital fund operations. ECSS is expected to be fully operational in fiscal year 2013 and to replace about 250 legacy logistics and procurement systems. DEAMS is expected to provide the entire spectrum of core financial management capabilities, including collections, commitments/obligations, cost accounting, general ledger, funds control, receipts and acceptance, accounts payable and disbursement, billing, and financial reporting for the Air Force general fund operations. DEAMS is expected to replace seven legacy systems and be fully operational in fiscal year 2014. GAO identified several areas in which the Air Force had not fully implemented best practices related to risk management and system testing. These findings increase the risk that these business system initiatives will not meet their stated functionality, cost, and milestone goals, thereby limiting the Air Force's efforts to achieve total asset visibility and other DOD business transformation priorities. Further, key Air Force business transformation strategic plans and documents were not aligned within the Air Force or with DOD's broader business transformation priorities. While each individual Air Force plan was intended to support the Air Force's business transformation efforts, the plans did not reflect a coordinated effort toward achieving a stated Air Force or DOD goal. For example, neither the Air Force's Military Equipment Accountability Improvement Plan for supporting DOD's military equipment valuation effort, nor the Air Force Logistics Enterprise Architecture Concept of Operations, its key strategic transformation plan for logistics, identified a shared relationship, including metrics, in supporting Air Force and DOD logistics and financial management transformation goals. As a result, neither the Air Force nor DOD will have the performance data needed to oversee efforts intended to improve the Air Force's ability to locate, manage, and account for assets throughout their life cycle.
In 1962, DOD instituted the Planning, Programming, and Budgeting System to establish near-term projections in defense spending. This system was intended to provide the necessary data to assist defense decision makers in making trade-offs among potential alternatives, thereby resulting in the best possible mix of forces, equipment, and support to accomplish DOD’s mission. The military services and other DOD components developed the detailed data projections for the budget year in which funds were being requested and at least the 4 succeeding years and provided them to the Office of the Secretary of Defense. The resulting projections were compiled and recorded in a 5-year plan. In 1987, Congress directed the Secretary of Defense to submit the five-year defense program (currently referred to as the future years defense program, or FYDP) used by the Secretary in formulating the estimated expenditures and proposed appropriations included in the President’s annual budget to support DOD programs, projects, and activities. The FYDP, which is submitted annually to Congress, is considered the official report that fulfills this legislative requirement. The Office of Program Analysis and Evaluation has responsibility for the assembly and distribution of the FYDP. The Office of the Under Secretary of Defense (Comptroller) has responsibility for the annual budget justification material that is presented to Congress. These offices work collaboratively to ensure that the data presented in the budget justification material and the FYDP are equivalent at the appropriation account level. The FYDP provides DOD and Congress a tool for looking at future funding needs beyond immediate budget priorities and can be considered a long-term capital plan. As GAO has previously reported, leading practices in capital decision making include developing a long-term capital plan to guide implementation of organizational goals and objectives and help decision makers establish priorities over the long term. In 2002, Congress directed the Department of Homeland Security to begin developing a future budget plan modeled after DOD’s FYDP. In the 2001 QDR Report, DOD established a new defense strategy and shifted the basis of defense planning from a “threat-based” model to a “capabilities-based” model. According to the QDR report, the capabilities-based model is intended to focus more on how an adversary might fight rather than specifically on whom the adversary might be or where a war might occur. The report further states that in adopting a capabilities-based approach, the United States must identify the capabilities required to deter and defeat adversaries, maintain its military advantage, and transform its forces and institutions. The QDR report also outlined a new risk management framework to use in considering trade-offs among defense objectives and resource constraints.
This framework consists of four dimensions of risk:

Force management–the ability to recruit, retain, train, and equip sufficient numbers of quality personnel and sustain the readiness of the force while accomplishing its many operational tasks;

Operational–the ability to achieve military objectives in a near-term conflict or other contingency;

Future challenges–the ability to invest in new capabilities and develop new operational concepts needed to dissuade or defeat mid- to long-term military challenges; and

Institutional–the ability to develop management practices and controls that use resources efficiently and promote the effective operation of the Defense establishment.

These risk areas will form the basis for DOD’s annual performance goals and for tracking associated performance results. Moreover, the QDR states that an assessment of the capabilities needed to counter both current and future threats must be included in DOD’s approach to assessing and mitigating risk.

The FYDP provides Congress visibility of broad DOD funding shifts and priorities regarding thousands of programs that have been aggregated, or grouped, by appropriation category. For example, we noted that DOD increased its Research, Development, Test and Evaluation (RDT&E) account category and decreased other account categories in the 2004 FYDP. Other funding shifts and priorities are less visible because the FYDP report, organized by program, cannot display some specific costs that are important to decision makers, such as funding for DOD’s civilian workforce. Moreover, the FYDP is a reflection of the limitations of DOD’s budget preparation process. For example, as we have reported in the past, the FYDP reflects DOD’s overly optimistic estimations of future program costs that often lead to costs being understated. Such understatements may have implications for many programs beyond the years covered by the FYDP. Finally, the costs of ongoing operations in Iraq and Afghanistan, which have been funded through supplemental appropriations, are not projected in the FYDP, thereby limiting visibility over these funds. The administration is expected to request additional supplemental funds in calendar year 2005, according to DOD officials. Although some costs are difficult to predict, DOD expects costs to become more predictable later this year. However, some requirements it plans to fund with the supplemental appropriation have already been identified.

The FYDP was designed to provide resource information at the program level that could be aggregated a variety of ways, including up to the appropriation category level. For individual programs, this means that decision makers have visibility over planned funding for 4 or 5 years beyond the current budget year. Similarly, the programs can be aggregated in a variety of ways to analyze future funding trends. For example, our comparison of the 2003 FYDP to the 2004 FYDP provides visibility of funding shifts that DOD made at the appropriation category level, specifically showing that over the common years of both FYDPs, DOD plans to increase funding in its RDT&E appropriation category, while in most years decreasing funds to Procurement, Military Construction, Military Personnel, and Operation and Maintenance. According to DOD officials, this shift toward RDT&E reflects DOD’s emphasis on transforming military forces. Since the FYDP does not clearly identify those programs DOD considers transformational, we could not validate this claim.
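As one illustration of the kind of appropriation-level comparison between FYDP versions described above, the following minimal Python sketch aggregates program element funding by appropriation category and computes year-by-year changes between two plan versions. The record layout and amounts are simplifying assumptions for illustration; they do not represent the actual FYDP data structure or the tools used in our analysis.

    # Hypothetical sketch of an appropriation-level comparison between two FYDP versions.
    from collections import defaultdict

    def aggregate_by_appropriation(program_elements):
        """Sum projected funding by (appropriation category, fiscal year)."""
        totals = defaultdict(float)
        for pe in program_elements:
            for year, amount in pe["funding"].items():
                totals[(pe["appropriation"], year)] += amount
        return totals

    def compare_fydps(fydp_old, fydp_new):
        """Return the change in funding, by (appropriation, year), between two versions."""
        old = aggregate_by_appropriation(fydp_old)
        new = aggregate_by_appropriation(fydp_new)
        return {key: new.get(key, 0.0) - old.get(key, 0.0) for key in set(old) | set(new)}

    # Illustrative records for one appropriation category (amounts in billions of constant dollars).
    fydp_2003 = [{"appropriation": "Operation and Maintenance", "funding": {2004: 140.0, 2005: 142.0}}]
    fydp_2004 = [{"appropriation": "Operation and Maintenance", "funding": {2004: 131.0, 2005: 133.0}}]
    changes = compare_fydps(fydp_2003, fydp_2004)
    # changes == {("Operation and Maintenance", 2004): -9.0, ("Operation and Maintenance", 2005): -9.0}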
Figure 1 shows the changes made between the 2003 and 2004 FYDPs to the department’s appropriation categories for the common 4-year period, 2004–2007. Appendix II provides a more detailed table. Compared to the 2003 FYDP, funding in the Operation and Maintenance appropriation category in the 2004 FYDP was reduced by at least $9 billion per year from 2004 through 2007, for a total of $42 billion over that period. About $41 billion of that decrease is accounted for by the elimination of the Defense Emergency Response Fund, which had projected over $10 billion in funding each year for 2004 through 2007 in the 2003 FYDP, but had no funding in the 2004 FYDP for those years. Over those same years, the “Other DOD accounts” category increased by a total of $19 billion. The increase in this category was mainly fueled by a $22 billion increase in the Defense Health Program, which was offset somewhat by a decrease in Revolving Management Funds.

Although DOD’s policy priorities can be discerned at the appropriation level, some important funding categories cannot be identified because program elements, the most basic components of the report, are intended to capture the total cost of the program, as opposed to the individual costs that make up the program. For example, funding for spare parts, civilian personnel, and information technology is included in funding for individual programs and cannot be readily extracted from them. Congress has expressed interest in all of these funding categories. We note that DOD officials stated that these funding categories are delineated in other reports to Congress. Program elements that encompass multiple systems, such as the Army’s Future Combat Systems and DOD’s Ballistic Missile Defense System, could also limit visibility over funding trends and trade-offs in the FYDP. For example, in its 2004 budget justification material, the administration requested funding for the Army’s Future Combat Systems—often referred to as a “system of systems”—under a single program element. In the National Defense Authorization Act for Fiscal Year 2004, Congress rejected the single program element and instead required the Secretary of Defense to break Future Combat Systems into three program elements. In the conference report accompanying the bill, the conferees noted that “the high cost and high risk require congressional oversight which can be better accomplished through the application of separate and distinct program elements for the [Future Combat System].” In another example, DOD had proposed that Congress repeal its requirement for specifying Ballistic Missile Defense System program elements. According to DOD’s legislative proposal, this would coincide with the Secretary of Defense’s goal to establish a single program that allows allocating and re-allocating of funds among competing priorities within the program. While Congress provided the administration flexibility for specifying program elements related to Ballistic Missile Defense, it nonetheless noted that budget reporting for Ballistic Missile Defense under one program element would be inappropriate.

Since the mid-1980s, we have reported a limitation in DOD’s budget formulation—the use of overly optimistic planning assumptions. Such overly optimistic assumptions limit the visibility of costs projected throughout the FYDP period and beyond. As a result, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and program termination.
For example, in January 2003, we reported that the estimated cost of developing eight major weapon systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. We currently expect DOD’s funding needs in some areas to be higher than the estimates in the FYDP. The following are some examples of anticipated cost increases based on recent reports in which we made recommendations to improve the management and cost estimates of these programs.

As we reported in April 2003, cost increases have been a factor in the Air Force substantially decreasing the number of F/A-22 Raptors to be purchased—from 648 to 276. Moreover, current budget estimates, which exceed mandated cost limitations, are dependent on billions of dollars of cost offset initiatives which, if not achieved as planned, will further increase program costs. In addition, GAO considers continued acquisition of this aircraft at increasing annual rates before adequate testing is completed to be a high-risk strategy that could further increase production costs.

DOD has not required the services to set aside funds to support the procurement and maintenance of elements of the Ballistic Missile Defense System. Management of this “system of systems” was shifted from the services to the Department’s Missile Defense Agency in January 2002, but procurement and maintenance costs will be borne by the services as elements of the system demonstrate sufficient maturity to enter into full-rate production. In April 2003, we concluded that because DOD had not yet set aside funds to cover its long-term costs, the department could find that it cannot afford to procure and maintain that system unless it reduces or eliminates its investment in other important weapons systems. We recommended that the Secretary of Defense explore the option of requiring the services to set aside funds for this purpose in the FYDP. DOD concurred with this recommendation, noting that doing so would not only promote the stability of the overall defense budget but would also significantly improve the likelihood that an element or component would actually be fielded.

Since its inception in fiscal year 1986, DOD’s $24 billion chemical demilitarization program (a 2001 estimate) has been plagued by frequent schedule delays, cost overruns, and continuing management problems. In October 2003, we testified that program officials had raised preliminary total program cost estimates by $1.4 billion and that other factors, yet to be considered, could raise these estimates even more.

In written comments on a draft of this report, DOD strongly objected to our conclusion that DOD has historically employed overly optimistic assumptions and noted that these statements do not reflect recent efforts to correct this problem. In August 2001, DOD established guidance that all major acquisition programs should be funded to the Cost Analysis Improvement Group estimates, which, according to DOD, have historically been far more accurate than Service estimates. However, as DOD acknowledges in its written comments, there is currently no auditable data available to document the effects of this guidance; therefore, we could not analyze this claim. Further, GAO reports issued after a draft of this report was sent to DOD – such as our March 2004 report on the Air Force’s F/A-22 program and our April 2004 testimony on DOD’s Chemical Demilitarization program – continue to raise questions about DOD’s planning assumptions.
For example, in our F/A-22 report, we continued to observe that additional increases in development costs for the F/A-22 are likely and in our report on DOD’s Chemical Demilitarization Program, we observed that the program continues to fall behind schedule milestones. Some of the examples listed above will have budgetary impacts beyond the 2009 end date of the 2004 FYDP. As the Congressional Budget Office (CBO) reported in January 2003, “programs to develop weapon systems often run for a decade or more before those systems are fielded, and other policy decisions have long-term implications; thus, decisions made today can influence the size and composition of the nation’s armed forces for many years to come.” In its February 2004 update to that report, CBO projected that if the programs represented in the 2004 FYDP were carried out as currently envisioned by DOD, demand for resources would grow from the current projection in 2009 of $439 billion to an average demand for resources of $458 billion a year between 2010 and 2022. When CBO assumed that costs for weapons programs and certain other activities would continue to grow as they have historically rather than as DOD currently projects, CBO’s projections increased to an average of $473 billion a year through 2009 and an average of $533 billion between 2010 and 2022. The FYDP does not include future costs for ongoing operations when these operations are funded through supplemental appropriations. Since the attacks of September 11, 2001, DOD has received supplemental appropriations totaling $158 billion in constant 2004 dollars to support operations in Iraq, Afghanistan, and elsewhere, as well as to initially recover and respond to the terrorist attacks. This amount exceeds the $99 billion DOD received in supplemental appropriations throughout all of the 1990s and is more than what DOD requested for its entire Operation and Maintenance account for fiscal year 2004. Table 1 summarizes these supplemental appropriations. In presentations related to the 2005 President’s budget submitted to Congress in early February 2004, DOD officials reported that the budget does not include funding for ongoing operations in Iraq and Afghanistan, and they expect another supplemental will be needed in January 2005 to finance incremental costs for these operations. Senior DOD officials indicated that operations in Iraq and Afghanistan will continue into fiscal year 2005, but the requirements and costs of these continued operations are difficult to estimate because of uncertainties surrounding the political situations in these regions. However, they noted that funding estimates will likely become clearer over the course of the year. For example, the Under Secretary of Defense (Comptroller) stated that by July 2004, the operations in Iraq and Afghanistan may be better defined and that having time to analyze expenditures will help in making more realistic projections. In addition, Service and DOD officials have already identified some requirements that have associated costs. For example, the Army has been authorized to temporarily increase its end strength by 30,000 soldiers. In briefings on the 2005 budget request, DOD and Army officials stated that they intended to partially fund this additional end strength with the supplemental appropriation anticipated for 2005. DOD, with congressional approval, has used different approaches in the past to fund operations. 
For example, in the former Yugoslavia, DOD funded operations begun in fiscal year 1996 through a combination of transfers between DOD accounts, absorbing costs within accounts, and supplemental appropriations. However, in 1997, Congress established the Overseas Contingency Operations Transfer Fund, which provided funding to DOD rather than directly to the individual military services, and allowed DOD to manage the funding of contingency operations among the military services more effectively and with some flexibility. In 2002, DOD determined that funding for operations in the former Yugoslavia was sufficiently stable to be included directly in appropriation account requests. GAO observed in a 1994 report that if an operation continued into a new fiscal year, it would seem appropriate that DOD would build the expected costs of that operation into its budget and allow Congress to expressly authorize and appropriate funds for its continuation. We continue to hold this view.

The FYDP, as currently structured, does not contain a link to defense capabilities or the dimensions of the risk management framework, both important QDR initiatives, limiting the FYDP’s usefulness and congressional visibility of the initiatives’ implementation. Further, although DOD is considering how to link resources to these initiatives, it does not have specific plans to make these linkages in the FYDP. The Major Force Programs, initially developed as the fundamental framework of the FYDP, remain virtually unchanged and are not representative of DOD’s capabilities-based approach. Furthermore, additional program aggregations that DOD created in the FYDP’s structure do not capture information related to capabilities-based analysis or the risk management framework, in part because these concepts have not been fully developed. DOD has modified the FYDP over time to create new categories of program elements; however, it currently does not include categorizations that are intended to relate to the QDR’s initiatives regarding defense capabilities and the risk management framework.

Major Force Programs, originally established to organize the FYDP into the major DOD missions, have remained virtually the same in the four decades since their introduction, do not reflect how DOD combat forces and their missions have changed over time, and do not organize the FYDP by major defense capabilities. For example, the Major Force Program of General Purpose Forces includes large numbers of programs with varied capabilities that would complicate comparisons needed for understanding defense capabilities and associated trade-off decisions inherent in risk management. General Purpose Forces include virtually all conventional forces within DOD, and slightly over one-third of DOD funding is allocated to this broad category. Ground combat units, tactical air forces, and combatant ships are among the wide array of forces considered General Purpose Forces. Including forces with such diverse capabilities in the same category diminishes the Major Force Program’s usefulness to DOD and Congress for identifying trade-offs among programs. Additionally, all available resources with comparable capabilities are not categorized in the same Major Force Program. For example, the Major Force Program structure identifies Guard and Reserve forces separately despite the fact that today Guard and Reserve forces are integrated into their respective Service’s force structure, deploy and fight with the general forces, and have some of the same capabilities.
Over time, as decision makers needed information not captured in the Major Force Programs, DOD created new aggregations of program elements and added attributes to the FYDP’s structure. The most recent aggregation categorized the data by force and infrastructure categories, which were developed to relate every dollar, person, and piece of equipment in the FYDP to either forces or infrastructure. This model groups forces, the warfighting tools of the Combatant Commanders, into broad operational categories according to their intended use (such as homeland defense or intelligence operations), and groups infrastructure, the set of activities needed to create and sustain forces, based upon the type of support activity it performs (such as force installations or central logistics). DOD has also added attribute fields to the program elements for such activities as space and management headquarters in order to capture the resources associated with specific areas of interest. However, these new aggregations and attributes were not intended to relate the FYDP’s resources to defense capabilities or the risk management framework.

According to DOD officials, DOD does not have specific plans to link capabilities and the risk management framework to the FYDP, in part because these concepts have not been fully developed. For example, capabilities-based analysis is still under development. DOD officials describe this as a complex process—representing a fundamental shift in the basis of defense planning and requiring the participation of all DOD components. In the past, DOD focused on whom an adversary might be, whereas the current approach focuses on how future adversaries might fight. DOD’s April 2003 Transformation Planning Guidance states that joint operating concepts will provide the construct for a new capabilities-based resource allocation process. To date, these joint operating concepts have not been formalized. According to DOD officials, while some concepts may be completed near-term, the overall initiative is expected to take 4 to 5 years to complete. Furthermore, although the risk management framework has been better defined than the capabilities have, it also has not been fully implemented because it has not been fully linked to resources. In December 2002, DOD instructed its components to begin displaying the linkage of plans, outputs, and resources in future budget justification material based upon the four dimensions of its risk management framework. According to DOD officials, in the fiscal year 2005 budget submission, DOD provided this linkage for 40 percent of its resources. DOD plans to complete this process by fiscal year 2007, but does not currently have plans to link the risk management framework to the FYDP as part of this process. DOD’s 2003 Annual Report provided an example of how the FYDP could be linked to the risk management framework using the force and infrastructure categories. However, according to DOD officials, this example was intended to be a rough aggregation for a specific performance metric and is not officially recognized as the most appropriate way to show how DOD’s resources link to the risk management framework. Therefore, this linkage has not been integrated into the FYDP’s structure.
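Although DOD has not adopted such a linkage, the following minimal Python sketch illustrates one way program elements could carry notional capability and risk-dimension tags alongside the existing FYDP categories, and how funding could then be aggregated by risk dimension. All names, tags, and amounts are hypothetical assumptions for illustration; this is not DOD's data structure or an actual FYDP record layout.

    # Notional sketch only: hypothetical capability and risk-dimension tags added to
    # program element records; these attributes do not exist in today's FYDP.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class ProgramElement:
        name: str
        major_force_program: str              # existing FYDP aggregation
        force_or_infrastructure: str          # existing force/infrastructure category
        funding_by_year: Dict[int, float]     # fiscal year -> amount (billions)
        capability: Optional[str] = None      # notional capability tag (assumed)
        risk_dimension: Optional[str] = None  # notional QDR risk-dimension tag (assumed)

    def funding_by_risk_dimension(elements, year):
        """Aggregate one year's funding by the notional risk-dimension tag."""
        totals = {}
        for pe in elements:
            key = pe.risk_dimension or "unassigned"
            totals[key] = totals.get(key, 0.0) + pe.funding_by_year.get(year, 0.0)
        return totals

    elements = [
        ProgramElement("Example ground unit", "General Purpose Forces", "Forces",
                       {2005: 1.2}, capability="Land combat", risk_dimension="Operational"),
        ProgramElement("Example depot support", "General Purpose Forces", "Infrastructure",
                       {2005: 0.4}),  # not yet tagged with a risk dimension
    ]
    print(funding_by_risk_dimension(elements, 2005))  # {'Operational': 1.2, 'unassigned': 0.4}

Because the tags sit alongside, rather than replace, the existing aggregations, a summary report could in principle show the same resources by appropriation category, Major Force Program, or risk dimension.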
It is important for DOD and congressional decision makers to have the most complete information possible on the costs of ongoing operations as they deliberate the budget. In a previous report, we observed that if an operation continues into a new fiscal year, it would seem appropriate that DOD would build the expected costs of that operation into its budget and allow Congress to expressly authorize and appropriate funds for its continuation. We recognize that defining those expected costs is challenging and that supplemental appropriations are sometimes necessary. Nonetheless, the consequence of not considering the expected costs of ongoing operations as part of larger budget deliberations is that neither the administration nor congressional decision makers will have the opportunity to fully examine the budget implications of the global war on terrorism. Indeed, the FYDP could be a useful tool for weighing the costs of defense priorities such as the global war on terrorism and DOD’s transformation efforts. However, as a reflection of the budget, the FYDP is weakened in this regard because it does not include known or likely costs of ongoing operations funded through supplemental appropriations. Without a clear understanding of such costs, members of Congress cannot make informed decisions about appropriations among competing priorities.

Additionally, the FYDP as it is currently structured does not provide either DOD or Congress with full visibility over how resources are allocated according to key tenets of the defense strategy outlined in the QDR. As a result, resource allocations may not reflect the priorities of the defense strategy, including its new capabilities-based approach and the risk management framework. Yet the current strategic environment and growing demand for resources require that DOD and Congress allocate resources according to the highest defense priorities. Indeed, as the common report that captures all components’ future program and budget proposals, the FYDP provides DOD an option for linking resource plans to its risk management framework and capabilities assessment and providing that information to Congress. Furthermore, this linkage could provide a crosswalk between capabilities and the risk management framework such that assessments of capabilities could be made in terms of the risk management framework, which balances dimensions of risk, such as near-term operational risk versus risks associated with mid- to long-term military challenges.

In the interest of providing Congress greater visibility over projected defense spending, we recommend that the Secretary of Defense direct the Undersecretary of Defense (Comptroller) to take the following two actions: (1) provide Congress data on known or likely costs for ongoing operations that are expected to extend into fiscal year 2005 for consideration during its deliberation over DOD’s fiscal year 2005 budget request and accompanying FYDP and (2) include known or likely projected costs of ongoing operations for the fiscal year 2006 and subsequent budget requests and accompanying FYDPs.
To enhance the effectiveness of the FYDP as a tool for planning and analysis in the current strategic environment, the Secretary of Defense should direct the Office of Program Analysis and Evaluation to take the following two actions: (1) align the program elements in the FYDP to defense capabilities needed to meet the defense strategy, as these capabilities are identified and approved, and to the dimensions of the risk management framework, and include this alignment with the FYDP provided to Congress, and (2) report funding levels for defense capabilities and the dimensions of the risk framework in its summary FYDP report to Congress.

In written comments on a draft of this report, DOD provided some general overarching comments concerning our characterization of the FYDP as a database, as well as other comments responding to our specific recommendations. First, DOD noted that it had redefined the FYDP as a report rather than a database, and stated that it maintains a variety of databases to support decision making that should not be confused with the FYDP itself. DOD stated that our characterization of the FYDP as a database resulted in a misinterpretation that pervades our draft report and results in incorrect assertions and conclusions. We have updated our report to refer to the FYDP as a report rather than a database in response to the definition change provided in DOD’s April 2004 guidance – issued after our draft report was sent to DOD for comment. However, we disagree with the DOD statement that characterizing the FYDP as a flexible database structure leads to incorrect assertions and conclusions. Whether the FYDP is referred to as a database or a report, it is an existing tool used to inform analyses, as DOD acknowledged in its written comments, and it has been modified over time to capture resource information associated with special areas of interest. Although a variety of databases are maintained by DOD to support decision making, the FYDP is submitted annually to Congress, as required. Therefore, we believe that our recommendations, which call for DOD to provide Congress with greater information in fiscal year 2005 and beyond on known or likely costs of operations and to enhance the FYDP as a tool in the new strategic environment, provide practical solutions for improving congressional visibility of DOD’s allocation of resources, as discussed below.

DOD neither concurred nor nonconcurred with the recommendations that the Undersecretary of Defense (Comptroller) provide Congress data on known or likely costs for ongoing operations that are expected to extend into fiscal year 2005 and beyond. DOD stated that it already provides this information to Congress as soon as it is sufficiently reliable and that, at this point in the war on terrorism, current operations are too fluid to permit an accurate determination of the amount of funding required a year in advance. In response to our statement that DOD does not include the costs of ongoing operations funded through supplemental appropriations, DOD further stated that items funded through supplemental appropriations are above and beyond resources budgeted and appropriated for peacetime operations and that funding requirements for wartime and contingency operations are driven by events and situations that DOD cannot anticipate. We are encouraged that DOD agrees with the principle of providing these data to Congress as soon as they are sufficiently reliable.
As we reported, DOD indicated that operations in Iraq and Afghanistan will continue into fiscal year 2005; therefore, it is reasonable that DOD would anticipate some costs associated with these operations. However, DOD did not budget any funds for these operations in its fiscal year 2005 budget request or accompanying FYDP submitted to Congress. Based on statements by the Undersecretary of Defense (Comptroller) that cost data will become clearer as the year progresses, we expect that DOD will be able to provide such data to Congress for both the fiscal year 2005 and 2006 budget deliberations. In addition, some requirements that have associated costs, such as the Army’s temporary increase in end strength, have already been identified. We acknowledge in our report the challenges associated with estimating costs for ongoing operations. Although DOD states that including these estimates would unnecessarily complicate resource discussions and decisions, we maintain that the challenges of estimating costs for ongoing operations must be weighed against Congress’s responsibility for balancing government-wide funding priorities using the best available data at the time of its budget deliberations.

Lastly, DOD nonconcurred with our recommendations for the Office of Program Analysis and Evaluation to align the program elements in the FYDP with defense capabilities and the risk management framework and include this alignment with the FYDP provided to Congress. DOD stated that it does not use the FYDP as a tool to conduct analyses of capability or risk trade-offs between systems, as such a tool would be relatively uninformative and needlessly complex, though the FYDP does inform those analyses. DOD also said it does not intend to embed capabilities or the risk management framework in the FYDP, as these constructs are still being developed and may change significantly, but it is working to create decision-support tools that will link resource allocations to capability and performance metrics, and it may be able to report on those allocations as the tools and processes mature. We maintain our view that the FYDP is the ideal vehicle for providing information on these new concepts to Congress. First, since the FYDP already exists as a legally mandated reporting mechanism, it avoids the creation of any duplicative reporting. Second, because the FYDP cuts across all the services and agencies, it provides a macro picture of DOD resource allocations in terms of both missions and appropriations. Third, as we note in our report, because the FYDP is flexible, DOD has periodically built new categories of program elements into it to provide decision makers with resource information as needed. Currently, Congress cannot use the FYDP to identify the results of DOD’s resource analyses of capabilities or risk trade-offs between programs because these relationships are not aligned with the program elements in the FYDP. We recognize that the FYDP is not the only tool available for defense resource decision making; however, we note, as DOD has stated in its written comments, that the FYDP informs analyses and reflects the resource implications of decisions. While we recognize that DOD is still working to define these concepts, we maintain our view that, once defined, reporting these relationships with the FYDP provided to Congress would improve congressional visibility of DOD resource allocations. DOD’s comments are included in their entirety in appendix III.
Annotated evaluations of DOD’s comments are also included in appendix III. We are sending copies of this report to the Secretary of Defense; the Undersecretary of Defense (Comptroller); and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9619. Major contributors to this report are listed in appendix IV.

We determined that the automated FYDP data was sufficiently reliable for use in meeting this report’s objectives. DOD checks the FYDP data against its budget request sent to Congress at the appropriation category level. We also compared the FYDP data with published documents DOD provided to ensure that the automated data correctly represented DOD’s budget request. Specifically, we compared total budget estimates, appropriation totals, military and civilian personnel levels, force structure levels, and some specific program information. Based on our and DOD’s comparisons, we were satisfied that the automated FYDP data and published data were in agreement. GAO has designated DOD’s financial management area as high risk due to long-standing deficiencies in DOD’s systems, processes, and internal controls. Since some of these systems provide the data used in the budgeting process, there are limitations to the FYDP’s use. However, since we determined that the FYDP accurately represents DOD’s budget request, it is sufficiently reliable as used for this report.

To determine whether the FYDP provides visibility over DOD funding priorities, we compared DOD reports and Secretary of Defense congressional testimonies that supported the 2003 and 2004 budget submissions against FYDP data. We also analyzed resource data from the 2003 and 2004 FYDPs for fiscal years 2004–2007 to identify trends. We adjusted the current dollars to constant 2004 dollars using appropriate DOD Comptroller inflation indexes to eliminate the effects of inflation (a simplified sketch of this type of adjustment appears below). To determine whether the FYDP provides visibility over likely future budget requests, we reviewed other related GAO, Congressional Research Service, and Congressional Budget Office reports and interviewed program and budget officials at the Office of the Secretary of Defense and service headquarters. In addition, we summarized documents related to supplemental appropriations and analyzed DOD officials’ statements regarding plans for supplemental appropriations in 2005. To determine whether the FYDP is useful for implementing DOD’s risk management framework and capabilities-based planning, we interviewed appropriate officials at the Office of the Secretary of Defense, service headquarters, and the Institute for Defense Analyses—the organization currently under contract to make improvements to the FYDP—and examined various DOD planning and budget documents, including the 2001 Report of the Quadrennial Defense Review, DOD’s 2003 Annual Report to the President and the Congress, and DOD’s fiscal year 2003 and 2004 budget submissions. We also examined the structure of the FYDP to determine if it currently included, or could include, a link to the risk management framework or defense capabilities. Our review was conducted between June 2003 and February 2004 in accordance with generally accepted government auditing standards.
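As a simplified illustration of the constant-dollar adjustment noted above, the following Python sketch divides then-year amounts by a deflator index with 2004 as the base year. The index values are placeholders assumed for illustration, not the DOD Comptroller inflation indexes actually used in our analysis.

    # Placeholder deflator indexes (base year 2004 = 1.000); values are assumptions.
    DEFLATORS_BASE_2004 = {2003: 0.980, 2004: 1.000, 2005: 1.021, 2006: 1.043}

    def to_constant_2004_dollars(amount_current, fiscal_year):
        """Convert a then-year (current-dollar) amount to constant 2004 dollars."""
        return amount_current / DEFLATORS_BASE_2004[fiscal_year]

    # Example: $450 billion in fiscal year 2006 current dollars.
    constant_2004 = to_constant_2004_dollars(450.0, 2006)  # roughly 431 in constant 2004 dollars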
Other DOD programs include chemical agent and munitions destruction, the defense health program, drug interdiction and counter-drug activities, and the Office of the Inspector General.

The following are GAO’s comments on the Department of Defense’s letter dated April 13, 2004.

1. DOD objected to our observation that DOD has historically employed overly optimistic planning assumptions in its budget formulations. In response to its comments, we acknowledged DOD guidance to reduce future resource shortfalls on page 11 of this report and noted the lack of auditable data to document the effects of this guidance. We also provided additional examples of GAO reports that continue to raise questions about DOD’s planning assumptions.

2. DOD provided a rationale for growth in civilian personnel costs. We intended civilian personnel to be an example of costs not visible in the FYDP, as opposed to an example of cost growth. Therefore, we have clarified the language on page 2 of this report to reflect this point. Further, we are not proposing that civilian personnel costs be disassociated from programs, as suggested by DOD’s comments.

3. DOD reiterated that it already provides Congress reliable information on the known costs for ongoing operations as soon as it is available. As we stated in our evaluation of agency comments on page 19 of this report, we are encouraged that DOD agrees with the principle of providing these data to Congress as soon as they are sufficiently reliable. However, we note that cost data is expected to become clearer as the year progresses and some requirements that have associated costs have already been identified. Therefore, we expect that DOD will be able to provide such data to Congress for their fiscal year 2005 and 2006 budget deliberations.

4. DOD noted that our report implied that the cost of increased Army force structure has been fully identified and asserted that, to the contrary, the work to define the particulars of this plan in sufficient detail to support budget development is still in progress. However, we note that in February 2004, the Undersecretary of Defense (Comptroller) outlined an Army force-restructuring plan that would be partially funded through the existing fiscal year 2004 supplemental appropriation. While DOD may not have fully defined the particulars of this plan, since it has identified a funding timeline, we believe that at least some of the cost of increased Army force structure can be estimated at this time.

5. Based on comments from the Air Force, DOD asked that we clarify that the reduction in the number of F/A-22 aircraft being purchased was not largely due to cost increases, and it referred to the role played by two Quadrennial Defense Reviews in the decision. Our report stated, however, that cost increases have been one factor in the Air Force’s substantially decreasing the number of F/A-22 Raptors to be purchased – from 648 to 276. Moreover, development costs have increased dramatically and in a report that was issued after this draft was sent to DOD for comment, GAO continued to observe that additional increases in development costs for the F/A-22 are likely. We maintain our view that the F/A-22 program illustrates that DOD’s funding needs in some areas exceed the estimates used in the FYDP.

6. Based on comments from the Air Force, DOD challenged our implication that the F/A-22 program will exceed mandated cost limitations if billions of dollars of cost offset initiatives are not achieved as planned.
In February 2003, we reported that the Air Force has had some success in implementing cost reduction plans to offset cost growth. However, production improvement programs, also designed to offset costs, have faced recent funding cutbacks and therefore are unlikely to offset cost growth as planned. The Air Force stated that it has no intention of violating mandated cost limitations, but that it does intend to seek relief from them as part of the fiscal year 2006 President’s budget. To the extent that the Air Force requests additional funds for the F/A-22, our view that the FYDP understates costs is further confirmed. In addition to the person named above, Patricia Lentini, Margaret Best, Barbara Gannon, Christine Fossett, Tom Mahalek, Betsy Morris, Ricardo Marquez, Jane Hunt, and Michael Zola also made major contributions to this report.
Congress needs the best available data about DOD's resource tradeoffs between the dual priorities of transformation and fighting the global war on terrorism. To help shape its priorities, in 2001 DOD developed a capabilities-based approach focused on how future adversaries might fight, and a risk management framework to ensure that current defense needs are balanced against future requirements. Because the Future Years Defense Program (FYDP) is DOD's centralized report providing DOD and Congress data on current and planned resource allocations, GAO assessed the extent to which the FYDP provides Congress visibility over (1) projected defense spending and (2) implementation of DOD's capabilities-based defense strategy and risk management framework. The FYDP provides Congress with mixed visibility over DOD's projected spending for the current budget year and at least four succeeding years. On the one hand, it provides visibility over many programs that can be aggregated so decision makers can see DOD's broad funding priorities by showing shifts in appropriation categories. On the other hand, in some areas DOD likely understates the future costs of programs in the FYDP because it has historically employed overly optimistic planning assumptions in its budget formulations. As such, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and delayed program termination decisions. Also, the FYDP does not reflect costs of ongoing operations funded through supplemental appropriations. Since September 2001, DOD has received $158 billion in supplemental appropriations to support the global war on terrorism, and DOD expects to request another supplemental in January 2005 to cover operations in Iraq and Afghanistan. While DOD officials stated they are uncertain of the amount of the request, some requirements they intend to fund with the supplemental appropriation have already been identified, such as temporarily increasing the Army's force structure. Defining costs during ongoing operations is challenging and supplemental appropriations are sometimes necessary; however, not considering the known or likely costs of ongoing operations expected to continue into the new fiscal year as part of larger budget deliberations will preclude DOD and congressional decision makers from fully examining the budget implications of the global war on terrorism. The FYDP provides Congress limited visibility over important DOD initiatives. While DOD is considering how to link resources to defense capabilities and the risk management framework, it does not have specific plans to make these linkages in the FYDP, in part because the initiatives have not been fully defined or implemented. Because the FYDP lacks these linkages, decision makers cannot use it to determine how a proposed increase in capability would affect the risk management framework, which balances dimensions of risk, such as near-term operational risk versus risks associated with mid- to long-term military challenges.